OpenDelta - An Open-Source Framework for Parameter-Efficient Tuning

An Open-Source Framework for Parameter-Efficient Tuning.


Overview | Installation | Basic Usage | Docs | Performance


Overview

OpenDelta is a toolkit for parameter-efficient tuning methods (which we dub delta tuning), with which users can flexibly assign (or add) a small amount of parameters to update while keeping most parameters frozen. With OpenDelta, users can easily implement prefix tuning, adapters, LoRA, or any other type of delta tuning with their preferred PTMs.
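
For example, a minimal sketch of attaching a LoRA delta to a backbone and freezing everything else (the model name and the exclude list here are illustrative assumptions; see the Must Try section and the issue snippets further down this page for the API as used in practice):

# Sketch: wrap a verified backbone with LoRA and train only the delta parameters.
from transformers import AutoModelForSequenceClassification
from opendelta import LoraModel

backbone = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
delta = LoraModel(backbone_model=backbone)             # inject LoRA modules into the backbone
delta.freeze_module(exclude=["deltas", "classifier"])  # keep only delta (and head) parameters trainable
delta.log()                                            # show the modified structure and trainable ratio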

Our repo is tested on Python 3.8 and PyTorch 1.9.0. Lower versions may also be supported.

A demo of using OpenDelta to modify the PLM (e.g., BART), and how the PLM changes with delta tuning.

Installation

create a virtualenv (optional)

conda create -n opendelta_env python=3.8
conda activate opendelta_env

Using Pip

Install OpenDelta using pip as follows:

pip install opendelta

To play with the latest features, you can also install OpenDelta from the source.

Build from Source

git clone https://github.com/thunlp/OpenDelta.git
cd OpenDelta

Option 1: If you won't modify the code, run

python setup.py install

Option 2: If you want to modify the code, run

python setup.py develop

Must Try

from transformers import AutoModelForSeq2SeqLM
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-base")  # load the backbone PLM
from opendelta import AutoDeltaModel
# attach a LoRA delta fine-tuned on MRPC, shared via DeltaHub
delta = AutoDeltaModel.from_finetuned("DeltaHub/lora_t5-base_mrpc", backbone_model=t5)
delta.log()  # print the modified structure and the trainable parameters

Verified Supported Models

  • You can try OpenDelta on any PyTorch-based backbone model.

  • However, in rare cases the interfaces of some submodules of the backbone model may not be supported. We therefore verified a set of commonly used models that OpenDelta is sure to support.

  • We will keep testing more emerging models.

  • Pull requests are welcome when you successfully apply OpenDelta to your own backbone model.

Delta methods: LoRA, Bias Tuning, Adapter (Houlsby), Adapter (Pfeiffer), AdapterDrop, Low-Rank Adapter, Compacter, Prefix Tuning, Prompt Tuning

Verified backbone models: T5, GPT-2, BART, DistilBERT, RoBERTa, BERT, T5-3b (parallel), DeBERTa-v2, CTRL, ViT

Performance Checked Combination

Google sheet here

Subject to change at any moment.

Comments
  • Could you provide some example code, like the OpenPrompt project does?

    Could you provide some example code, like the OpenPrompt project does?

    Some things are unclear to me in use, and detailed code references would help. Thanks!

    The problem I am currently running into: when using PrefixModel, the reparams parameters differ between specifying modified_modules=["0.layer.0"] and not passing the modified_modules argument. Am I using it incorrectly?

    With modified_modules=["0.layer.0"]: reparams.control_trans.2: weight: [3072, 512], bias: [3072], and model.generate raises: The size of tensor a (2) must match the size of tensor b (12) at non-singleton dimension 3

    Without that argument: reparams.control_trans.2: weight: [36864, 512], bias: [36864], and generate works normally.

    The model is T5.

    question 
    opened by fade-color 4
  • Update basemodel.py

    Update basemodel.py

    This PR removes the use of _pseudo_data_to_instantiate, so that OpenDelta can modify complex custom models rather than only pretrained models from Huggingface. Currently OpenDelta cannot create the complex inputs such models require, so _pseudo_data_to_instantiate raises an error. The Lora model does not use _pseudo_data_to_instantiate, and LoRA works on our model; after removing it, the delta model simply modifies the backbone.

    opened by CaffreyR 3
  • Is it possible to extract the Visualization module as an independent python packages?

    Is it possible to extract the Visualization module as an independent python packages?

    Visualization(model).structure_graph() is especially useful for viewing large language models, and sometimes I would like to use it in other scenarios.

    So instead of installing the whole OpenDelta, is it possible to isolate the Visualization functionality so that it becomes more lightweight and easier to install?

    enhancement 
    opened by Dounm 2
  • `index.html` is not included in the package if installing from PyPI

    `index.html` is not included in the package if installing from PyPI

    Thanks for the excellent package.

    Problem

    The index.html file in opendelta/utils/interactive/templates/ is a static file, and it will not be included in the distributed package file (like the wheel file) unless you add the package data manually in setup.py.
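
    A possible remedy (a sketch, not the maintainers' actual fix; the glob below is assumed from the path reported above) is to declare the template as package data in setup.py:

    # setup.py sketch: ship the static HTML template inside the wheel.
    from setuptools import setup, find_packages

    setup(
        name="opendelta",
        packages=find_packages(),
        package_data={"opendelta": ["utils/interactive/templates/*.html"]},
    )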

    Reproduce

    On a clean environment,

    $ pip install opendelta
    $ python examples/tutorial/0_interactive.py
    
    opened by Spico197 2
  • Differences between Houlsby and Pfeiffer adapters

    Differences between Houlsby and Pfeiffer adapters

    Thanks for providing such a great work here! There are structural differences between Houlsby and Pfeiffer adapters (Houlsby et al. place two adapters sequentially within one transformer layer, one after the multi-head attention and one after the FFN sub-layer, while the Pfeiffer et al. adapter is inserted only after the FFN "add & layer norm" sub-layer), which seem to be missing in the code.
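
    For reference, a minimal sketch (an illustration only, not OpenDelta's actual implementation) of where the two designs place bottleneck adapters inside one transformer layer; mha, ffn, and the layer norms are placeholder modules:

    import torch.nn as nn

    class Adapter(nn.Module):
        """Bottleneck adapter: down-project, nonlinearity, up-project, plus a residual connection."""
        def __init__(self, d_model, bottleneck=64):
            super().__init__()
            self.down = nn.Linear(d_model, bottleneck)
            self.up = nn.Linear(bottleneck, d_model)
            self.act = nn.ReLU()

        def forward(self, x):
            return x + self.up(self.act(self.down(x)))

    def houlsby_layer(x, mha, ffn, ln1, ln2, adapter_attn, adapter_ffn):
        # Houlsby et al.: two adapters per layer, one after multi-head attention and one after the FFN.
        x = ln1(x + adapter_attn(mha(x)))
        x = ln2(x + adapter_ffn(ffn(x)))
        return x

    def pfeiffer_layer(x, mha, ffn, ln1, ln2, adapter_ffn):
        # Pfeiffer et al.: a single adapter, inserted only after the FFN "add & layer norm" sub-layer.
        x = ln1(x + mha(x))
        return adapter_ffn(ln2(x + ffn(x)))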

    question 
    opened by ImKeTT 2
  • Prefix tuning for T5-small

    Prefix tuning for T5-small

    Hi, I met an error when using Prefix tuning with T5-small.

    File "/home/user/anaconda3/lib/python3.7/site-packages/OpenDelta/opendelta/basemodel.py", line 502, in _caller
        args, kwargs = delta_module.pre_forward(*args, **kwargs)
      File "/home/user/anaconda3/lib/python3.7/site-packages/OpenDelta/opendelta/delta_models/prefix.py", line 68, in pre_forward
        kwargs['past_key_value'] = (expand_batchsize(past_key), expand_batchsize(past_value))
      File "/home/user/anaconda3/lib/python3.7/site-packages/OpenDelta/opendelta/delta_models/prefix.py", line 60, in expand_batchsize
        x = x.reshape(self.prefix_token_num, self.num_heads, -1).transpose(0,1)
    RuntimeError: shape '[6, 6, -1]' is invalid for input of size 2048
    

    In T5-small, with 6 heads, it does not seem possible to divide 2048 evenly, no matter what num_prefix_token is.

    def expand_batchsize(x):
                x = x.reshape(self.prefix_token_num, self.num_heads, -1).transpose(0,1)
                x = x.unsqueeze(0).expand(batch_size, *x.shape)
                return x
    

    Could you help me with this? Thank you!

    bug 
    opened by chengjiali 2
  • What is the difference between OpenDelta and adapter-transformers?

    What is the difference between OpenDelta and adapter-transformers?

    Hi team, recently I was investigating methods for fine-tuning PTMs using an adapter (delta) model. I found the functions implemented by OpenDelta and adapter-transformers to be similar. Is there any difference between them? Thanks!

    opened by fighterhit 1
  • compatibility with pytorch

    compatibility with pytorch

    Hi, here is another problem. I use OpenDelta and PyTorch Lightning to fine-tune my model with LoRA. But when I try to load the checkpoint, something seems wrong: some state keys are missing. Apparently the LoRA weights were not saved. @ShengdingHu

    
    def opendelta_modify_with_lora(transformer, config):
        # pass
        LoraModel(backbone_model=transformer, modified_modules=['[r](\d).SelfAttention.[q,v,o,k]'])
        LoraModel(backbone_model=transformer, modified_modules=['[r](\d).EncDecAttention.[q,v,o,k]'])
        delta_model = LoraModel(backbone_model=transformer, modified_modules=['[r](\d).DenseReluDense.w[o,i]'])
    
        delta_model.freeze_module(exclude=["layer_norm", "lora_A", "lora_B"])
        # delta_model.log(delta_ratio=True, trainable_ratio=True, visualization=True)
        # Visualization(transformer).structure_graph();
        return transformer
    
    class EncoderDecoder(LightningModule):
        """
        Encoder Decoder
        """
    
        def __init__(self, config, tokenizer, transformer, dataset_reader):
            """
            :param config
            """
            super().__init__()
            self.config = config
            self.tokenizer = tokenizer
            self.model = transformer
            self.dataset_reader = dataset_reader
    
            self.use_deepspeed = self.config.compute_strategy.startswith("deepspeed")
            self.use_ddp = self.config.compute_strategy.startswith("ddp")
            self.load_model()
    
            self._last_global_step_saved = -1
    
            if self.config.fishmask_mode is not None:
                fishmask_plugin_on_init(self)
    
    model = EncoderDecoder.load_from_checkpoint("my file path")
    


    opened by CaffreyR 1
  • RuntimeError: This is a delta model, which should be attached to a backbone model and can't forward any data by itself. Please using the backbone model's forward function after attach the delta model to the backbone. The batch received was empty, your model won't be able to train on it. Double-check that your training dataset contains keys expected by the model: args,kwargs,label_ids,label.

    RuntimeError: This is a delta model, which should be attached to a backbone model and can't forward any data by itself. Please using the backbone model's forward function after attach the delta model to the backbone. The batch received was empty, your model won't be able to train on it. Double-check that your training dataset contains keys expected by the model: args,kwargs,label_ids,label.

    I used a BERT model to train on the RAFT datasets, and the original model trained fine. But when I tried to add LowRankAdapterModel for fine-tuning, it failed. I simply applied the code in this. @ShengdingHu

    #!/usr/bin/env python
    # coding: utf-8
    
    # In[1]:
    
    
    import datasets
    
    datasets.logging.set_verbosity_error()
    
    
    # In[2]:
    
    
    from datasets import get_dataset_config_names
    
    RAFT_TASKS = get_dataset_config_names("ought/raft")
    RAFT_TASKS
    
    
    # In[3]:
    
    
    from datasets import load_dataset
    
    TASK = "ade_corpus_v2"
    raft_dataset = load_dataset("ought/raft", name=TASK)
    raft_dataset
    
    
    # In[4]:
    
    
    from transformers import AutoTokenizer,Seq2SeqTrainingArguments, TrainerCallback
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    
    from sklearn.model_selection import train_test_split
    X = raft_dataset["train"]['Sentence']
    y = raft_dataset["train"]['Label']
    
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
    X_train_tokenized = tokenizer(X_train, padding=True, truncation=True, max_length=512)
    X_val_tokenized = tokenizer(X_val, padding=True, truncation=True, max_length=512)
    
    
    # In[5]:
    
    
    # X_train_tokenized
    
    
    # In[19]:
    
    
    item={}
    for key, val in X_train_tokenized.items():
        if key == 'input_ids':
            item['label_ids']=torch.tensor(val[idx])
        else:
            item[key]=torch.tensor(val[idx])
            
    item
            
    
    
    # In[6]:
    
    
    import torch
    class Dataset(torch.utils.data.Dataset):
        def __init__(self, encodings, labels=None):
            self.encodings = encodings
            self.labels = labels
    
        def __getitem__(self, idx):
    #         item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
            item={}
            for key, val in self.encodings.items():
                if key == 'input_ids':
                    item['label_ids']=torch.tensor(val[idx])
                else:
                    item[key]=torch.tensor(val[idx])
            if self.labels:
                item["label"] = torch.tensor(self.labels[idx]-1)
            return item
    
        def __len__(self):
            return len(self.encodings["input_ids"])
    
    train_dataset = Dataset(X_train_tokenized, y_train)
    val_dataset = Dataset(X_val_tokenized, y_val)
    
    
    # In[7]:
    
    
    train_dataset[0]
    
    
    # In[8]:
    
    
    from transformers import TrainingArguments, Trainer
    from transformers import AutoModelForSequenceClassification,EarlyStoppingCallback
    
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
    
    
    # In[9]:
    
    
    from opendelta import Visualization
    Visualization(model).structure_graph();
    
    
    # In[13]:
    
    
    from opendelta import LowRankAdapterModel
    delta_model1 = LowRankAdapterModel(backbone_model=model, modified_modules=['LayerNorm'])
    # delta_model1.freeze_module(set_state_dict = True)
    delta_model1.log(delta_ratio=True, trainable_ratio=True, visualization=True)
    
    from opendelta import LoraModel
    delta_model2 = LoraModel(backbone_model=model, modified_modules=['dense'])
    # delta_model2.freeze_module(set_state_dict = True)
    delta_model2.log(delta_ratio=True, trainable_ratio=True, visualization=True)

    from opendelta import CompacterModel
    delta_model3 = CompacterModel(backbone_model=model, modified_modules=['dense'])
    # delta_model2.freeze_module(set_state_dict = True)
    delta_model3.log(delta_ratio=True, trainable_ratio=True, visualization=True)
    # In[14]:
    
    
    import numpy as np
    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    def compute_metrics(p):
        pred, labels = p
        pred = np.argmax(pred, axis=1)
    
        accuracy = accuracy_score(y_true=labels, y_pred=pred)
        recall = recall_score(y_true=labels, y_pred=pred)
        precision = precision_score(y_true=labels, y_pred=pred)
        f1 = f1_score(y_true=labels, y_pred=pred)
    
        return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
    
    # Define Trainer
    args = TrainingArguments(
        output_dir="output",
        evaluation_strategy="steps",
        eval_steps=500,
        per_device_train_batch_size=8,
        per_device_eval_batch_size=8,
        num_train_epochs=3,
        seed=0,
        load_best_model_at_end=True,
    )
    trainer = Trainer(
        model=delta_model1,
    #     model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=val_dataset,
        compute_metrics=compute_metrics,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
    )
    
    # Train pre-trained model
    trainer.train()
    
    
    # TrainOutput(global_step=15, training_loss=0.5652575810750325, metrics={'train_runtime': 11.1754, 'train_samples_per_second': 10.738, 'train_steps_per_second': 1.342, 'total_flos': 4563332366400.0, 'train_loss': 0.5652575810750325, 'epoch': 3.0})
    
    
    

    RuntimeError: This is a delta model, which should be attached to a backbone model and can't forward any data by itself. Please using the backbone model's forward function after attach the delta model to the backbone.
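
    For context, the error text above suggests handing the backbone model (with the delta attached) to the Trainer rather than the delta object itself; a hedged sketch of that change, reusing the names from the script above (an assumption based on the error message, not a confirmed fix):

    # Attach the delta to the backbone, freeze the non-delta parameters,
    # then pass the *backbone* model to the Trainer.
    delta_model1 = LowRankAdapterModel(backbone_model=model, modified_modules=['LayerNorm'])
    delta_model1.freeze_module(set_state_dict=True)

    trainer = Trainer(
        model=model,  # the backbone with deltas attached, not delta_model1
        args=args,
        train_dataset=train_dataset,
        eval_dataset=val_dataset,
        compute_metrics=compute_metrics,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
    )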

    opened by CaffreyR 1
  • LowRankAdapter not working with Bert models

    LowRankAdapter not working with Bert models

    OK, I am trying to use LowRankAdapterModel with bert-base-uncased and bert-large-uncased, and I am getting the following error. Please look into it.


    KeyError                                  Traceback (most recent call last)
    in ()
          1 from opendelta import LowRankAdapterModel
    ----> 2 delta_model1 = LowRankAdapterModel(backbone_model=model)
          3 delta_model1.freeze_module(set_state_dict = True)
          4 delta_model1.log(delta_ratio=True, trainable_ratio=True, visualization=True)

    5 frames
    /usr/local/lib/python3.7/dist-packages/opendelta/delta_models/low_rank_adapter.py in __init__(self, backbone_model, reduction_factor, non_linearity, low_rank_w_init, low_rank_rank, modified_modules, exclude_modules, unfrozen_modules, common_structure, interactive_modify)
        167     unfrozen_modules=unfrozen_modules,
        168     common_structure=common_structure,
    --> 169     interactive_modify=interactive_modify,
        170 )
        171 arg_names = get_arg_names_inside_func(self.__init__)

    /usr/local/lib/python3.7/dist-packages/opendelta/basemodel.py in __init__(self, backbone_model, modified_modules, exclude_modules, unfrozen_modules, interactive_modify, common_structure)
        130 self.common_structure = common_structure
        131 if self.common_structure:
    --> 132     self.structure_mapping = CommonStructureMap.load(self.backbone_model)
        133 else:
        134     self.structure_mapping = None

    /usr/local/lib/python3.7/dist-packages/opendelta/utils/structure_mapping.py in load(cls, backbone_model, strict, warining, visualize)
        317 if backbone_class not in cls.Mappings:
        318     raise KeyError(backbone_class)
    --> 319 mapping = cls.Mappings[backbone_class]
        320 if visualize:
        321     logger.info("Since you are using the common structure mapping, draw the transformed parameter structure for checking.")

    /usr/local/lib/python3.7/dist-packages/opendelta/utils/structure_mapping.py in __getitem__(self, key)
        279     raise KeyError(key)
        280 value = self._mapping_string[key]
    --> 281 self._mapping[key] = eval(value)
        282 return self._mapping[key]
        283

    /usr/local/lib/python3.7/dist-packages/opendelta/utils/structure_mapping.py in ()

    /usr/local/lib/python3.7/dist-packages/opendelta/utils/structure_mapping.py in mapping_for_SequenceClassification(mapping, type)
        252     }
        253 elif type == "bert":
    --> 254     mapping.pop("lm_head")
        255     mapping["classifier"] = {"name": "classifier"}
        256 elif type == "deberta":

    KeyError: 'lm_head'

    This is how the model is defined:

    config = AutoConfig.from_pretrained(
        "bert-base-uncased",
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    config.dropout_rate = 0.0
    tokenizer = AutoTokenizer.from_pretrained(
        "bert-base-uncased",
        cache_dir=model_args.cache_dir,
        use_fast=model_args.use_fast_tokenizer,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased",
        from_tf=bool(".ckpt" in model_args.model_name_or_path),
        config=config,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    model.resize_token_embeddings(len(tokenizer))

    opened by zuluzazu 1
  • `sequential` parameter is not used in `AdapterModel`

    `sequential` parameter is not used in `AdapterModel`

    Hi,

    Thanks for the awesome tool! I noticed that the sequential: Optional[str]=True parameter of AdapterModel is not used, so the user cannot actually insert the adapter in a parallel manner by setting sequential=False. I think this is a little confusing. Maybe you could add an insert_parallel_module() function to the AdapterModel class, or simply not let the user set the sequential parameter when initializing AdapterModel.

    opened by alvin870203 1
  • Installing via setup.py reports error: aiohttp 4.0.0a0 is installed but aiohttp!=4.0.0a0,!=4.0.0a1 is required by {'fsspec'}

    Installing via setup.py reports error: aiohttp 4.0.0a0 is installed but aiohttp!=4.0.0a0,!=4.0.0a1 is required by {'fsspec'}

    Installed /home/bmxm/anaconda3/envs/cpm-ant-plus/lib/python3.8/site-packages/opendelta-0.3.2-py3.8.egg
    Processing dependencies for opendelta==0.3.2
    error: aiohttp 4.0.0a0 is installed but aiohttp!=4.0.0a0,!=4.0.0a1 is required by {'fsspec'}

    opened by daliang0222 1
  • Example of multi-task

    Example of multi-task

    Hi,

    I saw on the documentation page there is a page for multi-task training: https://opendelta.readthedocs.io/en/latest/notes/pluginunplug.html.

    However, I think it is not entirely clear how this modelling approach would work in practice.

    Are there any examples of using OpenDelta for multi-task training, with the training code etc.?

    Thanks in advance

    Best,

    Niall

    opened by NtaylorOX 2
  • tutorial doc bug

    tutorial doc bug

    Hi, I noticed that there are some bugs in the BMTrain tutorial file; could you fix them in a future update?

    argument bug

    returns: 2_with_bmtrain.py: error: unrecognized arguments: --delta_type low_rank_adapter

    delta model visualization bug

    returns:

    File "./2_with_bmtrain.py", line 132, in get_model
    od.Visualization(model).structure_graph()
    AttributeError: module 'opendelta' has no attribute 'Visualization'
    

    To reproduce this, I worked with OpenDelta 0.3.2.

    lowrank adapter with bert

    when using bert with lowrankadapter, returns

    AttributeError: str(forward() got an unexpected keyword argument 'output_pooler_output')
            The LowRankAdapterModel requires a dummy_inputs to be passed through the model to understand the dimensionality of each tensor in the computation graph. 
             The BertModel Class has no dummy_inputs, and automatically created dummy_inputs failed.
             Refer to `https://opendelta.readthedocs.io/en/latest/notes/faq.html` for detail.
    

    lora with bert

    Traceback (most recent call last):
      File "./2_with_bmtrain.py", line 371, in <module>
        main()
      File "./2_with_bmtrain.py", line 360, in main
        tokenizer, model, optimizer, lr_scheduler = setup_model_and_optimizer(args)
      File "./2_with_bmtrain.py", line 204, in setup_model_and_optimizer
        model = get_model(args)
      File "./2_with_bmtrain.py", line 135, in get_model
        delta_model = LoraModel(backbone_model=model, modified_modules=['project_q', 'project_k'], backend='bmt')
      File "/root/miniconda3/lib/python3.8/site-packages/opendelta/delta_models/lora.py", line 136, in __init__
        self.add_all_delta_to_backbone(self.backbone_model,
      File "/root/miniconda3/lib/python3.8/site-packages/opendelta/basemodel.py", line 213, in add_all_delta_to_backbone
        self.update_module(backbone, key)
      File "/root/miniconda3/lib/python3.8/site-packages/opendelta/delta_models/lora.py", line 143, in update_module
        parallel_module = self.new_module_like(child_module=child_ref)
      File "/root/miniconda3/lib/python3.8/site-packages/opendelta/delta_models/lora.py", line 151, in new_module_like
        in_features, out_features = child_module.in_features, child_module.out_features
      File "/root/miniconda3/lib/python3.8/site-packages/bmtrain-0.1.8-py3.8-linux-x86_64.egg/bmtrain/layer.py", line 12, in __getattr__
        ret = super().__getattr__(name)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'Linear' object has no attribute 'in_features'
    

    incorrect installation commands

    [email protected]:~/OpenDelta/examples/tutorial# pip install [email protected]:OpenBMB/ModelCenter.git
    ERROR: Invalid requirement: '[email protected]:OpenBMB/ModelCenter.git'
    Hint: It looks like a path. File '[email protected]:OpenBMB/ModelCenter.git' does not exist.
    

    Thanks for your contribution to the open-source community; if you get some time in the future, it would be great to update the tutorial. Regards, Jiajun

    opened by zhujiajunbryan 1
  • Feature Request: Add Support for "Aside Modules"

    Feature Request: Add Support for "Aside Modules"

    Injecting additional trainable modules that connect the unfrozen modules in parameter-efficient finetuning can improve gradient flow and significantly improve convergence speed and performance (at least when finetuning models for information retrieval); see https://arxiv.org/pdf/2208.09847.pdf.

    enhancement 
    opened by ethankim00 1
  • does opendelta support gradient_checkpointing?

    does opendelta support gradient_checkpointing?

    Thank you for the awesome work. I met some problems when using OpenDelta with gradient_checkpointing; it throws: "RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn". By the way, the code works well when gradient_checkpointing is disabled.

    so does opendelta support gradient_checkpointing?

    opened by hmzo 3
Releases: v0.3.2

Owner: THUNLP (Natural Language Processing Lab at Tsinghua University)