Library for 8-bit optimizers and quantization routines.

Overview

bitsandbytes

Bitsandbytes is a lightweight wrapper around CUDA custom functions, in particular 8-bit optimizers and quantization functions.

Paper -- Video -- Docs

TL;DR

Installation:

  1. Note down version: conda list | grep cudatoolkit
  2. Replace 111 with the version that you see: pip install bitsandbytes-cuda111

Usage:

  1. Comment out optimizer: #torch.optim.Adam(....)
  2. Add the 8-bit optimizer of your choice: bnb.optim.Adam8bit(....) (arguments stay the same)
  3. Replace embedding layer if necessary: torch.nn.Embedding(..) -> bnb.nn.Embedding(..)

Features

  • 8-bit Optimizers: Adam, AdamW, RMSProp, LARS, LAMB (saves 75% memory)
  • Stable Embedding Layer: Improved stability through better initialization and normalization
  • 8-bit quantization: Quantile, Linear, and Dynamic quantization (see the sketch after this list)
  • Fast quantile estimation: Up to 100x faster than other algorithms
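
The blockwise quantization routines are exposed in bitsandbytes.functional and are also what the 8-bit optimizers use internally. The round trip below is a minimal sketch rather than an excerpt from the official docs: it assumes that quantize_blockwise returns the 8-bit tensor together with an (absmax, code) quantization state, and it reuses the dequantize_blockwise(..., absmax=..., code=...) keyword form that appears in the DequantizeAndLinear issue further down this page.

import torch
import bitsandbytes.functional as BF

x = torch.randn(4096, 4096, device='cuda')

# block-wise dynamic quantization: 8-bit codes plus per-block absmax values
# and the shared dynamic quantization codebook (assumed return layout)
x8, (absmax, code) = BF.quantize_blockwise(x)

# dequantize back to float32 and inspect the round-trip error
x32 = BF.dequantize_blockwise(x8, absmax=absmax, code=code)
print((x - x32).abs().mean())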

Requirements & Installation

Requirements: anaconda, cudatoolkit, pytorch
Hardware requirements: NVIDIA Maxwell GPU or newer (>=GTX 9XX)
Supported CUDA versions: 9.2 - 11.3

The requirements can best be fulfilled by installing pytorch via anaconda. You can install PyTorch by following the "Get Started" instructions on the official website.

bitsandbytes is compatible with all major PyTorch releases and cudatoolkit versions, but for now, you need to select the right version manually. To do this, run:

conda list | grep cudatoolkit

and take note of the Cuda version that you have installed. Then you can install bitsandbytes via:

# choices: {cuda92, cuda100, cuda101, cuda102, cuda110, cuda111, cuda113}
# replace XXX with the respective number
pip install bitsandbytes-cudaXXX

To check if your installation was successful, you can execute the following command, which runs a single bnb Adam update.

wget https://gist.githubusercontent.com/TimDettmers/1f5188c6ee6ed69d211b7fe4e381e713/raw/4d17c3d09ccdb57e9ab7eca0171f2ace6e4d2858/check_bnb_install.py && python check_bnb_install.py

Using bitsandbytes

Using the 8-bit Optimizers

With bitsandbytes, 8-bit optimizers can be used by changing a single line of code in your codebase. For NLP models we also recommend using the StableEmbedding layer (see below), which improves results and helps with stable 8-bit optimization. To get started with 8-bit optimizers, it is sufficient to replace your old optimizer with the 8-bit optimizer in the following way:

import bitsandbytes as bnb

# adam = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995)) # comment out old optimizer
adam = bnb.optim.Adam8bit(model.parameters(), lr=0.001, betas=(0.9, 0.995)) # add bnb optimizer
adam = bnb.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995), optim_bits=8) # equivalent


torch.nn.Embedding(...) ->  bnb.nn.StableEmbedding(...) # recommended for NLP models

Note that by default all parameter tensors with less than 4096 elements are kept at 32-bit even if you initialize those parameters with 8-bit optimizers. This is done since such small tensors do not save much memory and often contain highly variable parameters (biases) or parameters that require high precision (batch norm, layer norm). You can change this behavior like so:

# parameter tensors with less than 16384 values are optimized in 32-bit
# it is recommended to use multiples of 4096
adam = bnb.optim.Adam8bit(model.parameters(), min_8bit_size=16384) 

Change Bits and other Hyperparameters for Individual Parameters

If you want to optimize some unstable parameters with 32-bit Adam and others with 8-bit Adam, you can use the GlobalOptimManager. With this, we can also configure specific hyperparameters for particular layers, such as embedding layers. To do that, we need two things: (1) register the parameters while they are still on the CPU, and (2) override the config with the new desired hyperparameters (anytime, anywhere). See our guide for more details.
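
A sketch of what this can look like is given below. It is only an illustration under assumptions: MyModel stands in for your own torch.nn.Module, model.emb is a hypothetical embedding layer, and the override_config(parameter, key, value) call is assumed to set a per-parameter hyperparameter; register_parameters is the call referenced in the 0.26.0 patch notes further down this page.

import torch
import bitsandbytes as bnb

mng = bnb.optim.GlobalOptimManager.get_instance()

model = MyModel()                            # placeholder for your own torch.nn.Module
mng.register_parameters(model.parameters())  # (1) register while parameters are still on the CPU
model = model.cuda()

# 8-bit Adam for all parameters by default ...
adam = bnb.optim.Adam(model.parameters(), lr=0.001, optim_bits=8)

# (2) ... but keep a particularly unstable parameter (hypothetical model.emb) in 32-bit state
mng.override_config(model.emb.weight, 'optim_bits', 32)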

Fairseq Users

To use the Stable Embedding Layer, override the respective build_embedding(...) function of your model. Make sure to also use the --no-scale-embedding flag to disable the scaling of the word embedding layer (the StableEmbedding layer applies layer norm instead). You can use the optimizers by replacing the optimizer in the respective file (adam.py etc.).
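
For the embedding part, a rough sketch of such an override is shown below. This is a simplified illustration, not fairseq's actual code: it assumes fairseq's TransformerModel exposes the build_embedding(cls, args, dictionary, embed_dim, path=None) hook, that dictionary.pad() returns the padding index, and that bnb.nn.StableEmbedding takes the same (num_embeddings, embedding_dim, padding_idx) arguments as torch.nn.Embedding; loading pretrained embeddings via path is omitted.

from fairseq.models.transformer import TransformerModel

import bitsandbytes as bnb


class BnbTransformerModel(TransformerModel):
    @classmethod
    def build_embedding(cls, args, dictionary, embed_dim, path=None):
        # swap fairseq's default nn.Embedding for the StableEmbedding;
        # remember to also pass --no-scale-embedding when training
        return bnb.nn.StableEmbedding(len(dictionary), embed_dim, dictionary.pad())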

Release and Feature History

For upcoming features and changes and the full history, see the Patch Notes.

Errors

  1. RuntimeError: CUDA error: no kernel image is available for execution on the device. Solution

License

The majority of bitsandbytes is licensed under MIT; however, portions of the project are available under separate license terms: PyTorch is licensed under the BSD license.

We thank Fabio Cannizzo for his work on FastBinarySearch which we use for CPU quantization.

Citation

If you find this library and the 8-bit optimizers or quantization routines useful, please consider citing our work.

@misc{dettmers2021optim8bit,
      title={8-bit Optimizers via Block-wise Quantization},
      author={Tim Dettmers and Mike Lewis and Sam Shleifer and Luke Zettlemoyer},
      year={2021},
      eprint={2110.02861},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Comments
  • python setup.py install error

    (bitsandbytes) [email protected]:~/disk1/github/bitsandbytes$ python setup.py install
    Traceback (most recent call last):
      File "setup.py", line 15, in <module>
        name = f"bitsandbytes-cuda{os.environ['CUDA_VERSION']}",
      File "/home/chenxin/disk1/anaconda3/envs/bitsandbytes/lib/python3.8/os.py", line 675, in __getitem__
        raise KeyError(key) from None
    KeyError: 'CUDA_VERSION'

    (bitsandbytes) [email protected]:~/disk1/github/bitsandbytes$ conda list | grep cudatoolkit
    cudatoolkit               11.1.1               h6406543_8    conda-forge

    documentation enhancement 
    opened by mathpopo 10
  • Did you ever try MNMT systems?

    As reported in the paper, for training a bi-directional transformer model on WMT14 or WMT16 the performance of 8-bit Adam stays relatively consistent with the 32-bit counterparts. I was also able to verify this on other data sources for training bi-directional models with my own setup.

    However, I've also tried multiple variations of 8-bit optimizers on multilingual neural machine translation (MNMT) models in fairseq and there it seems that even with --no-scale-embedding as well as the StableEmbedding the performance is roughly 3 BLEU behind the counterparts. The --no-scale-embedding flag amounts to roughly 7 BLEU gain, while the xavier init amounts to roughly 0.4 BLEU gain. Didn't look into the effect of the layer norm of the stable embeddings yet.

    Did you do any testing on that and have practical tips on getting the performance up?

    bug question 
    opened by SirRob1997 9
  • undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37

    (torch1.8-py3.8) [email protected]:/home/share/jiaofangkai$ python check_bnb_install.py
    Traceback (most recent call last):
      File "check_bnb_install.py", line 1, in <module>
        import bitsandbytes as bnb
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/__init__.py", line 5, in <module>
        from .optim import adam
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/__init__.py", line 5, in <module>
        from .adam import Adam, Adam8bit, Adam32bit
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/adam.py", line 6, in <module>
        from bitsandbytes.optim.optimizer import Optimizer2State
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/optimizer.py", line 6, in <module>
        import bitsandbytes.functional as F
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/functional.py", line 13, in <module>
        lib = ct.cdll.LoadLibrary(os.path.dirname(__file__) + '/libbitsandbytes.so')
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/ctypes/__init__.py", line 459, in LoadLibrary
        return self._dlltype(name)
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/ctypes/__init__.py", line 381, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37
    

    Hi, I have encountered a problem similar to #5. I have tested with a Tesla T4 and an RTX 2080 Ti, but both failed.

    The environments are as follows:

    # TeslaT4
    Ubuntu 18.04.6, Tesla T4, cuda-10.1, driver version: 418.197.02, python=3.8, torch=1.8.1+cu101
    
    # RTX 2080Ti
    Ubuntu 20.04.3, RTX 2080Ti, cuda-10.1, driver version: 435.21, python=3.8, torch=1.8.1+cu101
    
    bug 
    opened by SparkJiao 6
  • Support for Tesla Architecture

    First of all, great work!

    Secondly, I can see that you specify that Maxwell Architecture is necessary, and I am wondering if

    1. it's possible to do 8-bit optimization on Tesla Architecture
    2. there are plans to implement it

    I ask because Kaggle and Colab notebooks use Tesla Architectures (P100, K80), and I'm sure those communities, myself included, would be interested in using bitsandbytes.

    enhancement 
    opened by nbroad1881 5
  • Error when running import bitsandbytes as bnb

    Running import bitsandbytes as bnb produces the following error: OSError: /home/anaconda3/envs/ner/lib/python3.6/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37

    Hello, how can this be solved?

    opened by zhishui3 4
  • no difference in memory usage

    Hi. I am training my network with bnb.optim.Adam8bit vs torch.optim.Adam but I don't see any difference in memory consumption.

    Running on an RTX 2080 Ti (single GPU or DDP) with cudatoolkit 11.1.74 and bitsandbytes-cuda111.

    Looking at nvidia-smi, I see 9.6 GB in both cases. Am I missing something here?

    opened by ofrimasad 3
  • errors when training to the third epoch. everytime.

    THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorMath.cu line=29 error=1 : invalid argument
    Traceback (most recent call last):
      File "train_pointunet.py", line 211, in <module>
        loss_seg = lossfunc_seg(outputs_seg, labels)+lossfunc_dice(outputs_seg,labels)
      File "/home/why/miniconda3/envs/3.6.8/lib/python3.6/site-packages/torch/tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/why/miniconda3/envs/3.6.8/lib/python3.6/site-packages/torch/autograd/__init__.py", line 147, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: cuda runtime error (1) : invalid argument at /pytorch/aten/src/THC/generic/THCTensorMath.cu:29
    

    I'm very confused because it works fine in the first several epochs.

    opened by Dootmaan 2
  • [Question] Usage of bnb.nn.Embedding with existing classes from other libraries

    Replace embedding layer if necessary: torch.nn.Embedding(..) -> bnb.nn.Embedding(..)

    Does this presuppose that the user creates custom classes to replace (for example) Hugging Face transformers' GPT2DoubleHeadsModel? Or is there something like bnb.optim.GlobalOptimManager which changes a provided model instance to use bitsandbytes embeddings instead of torch ones?

    enhancement question 
    opened by LSinev 2
  • The code uses more GPU memory with Multi-scale Vision Transformers

    Hi,

    Thanks for the great work! I'm currently trying to apply your code to vision transformers, specifically this code base: https://github.com/facebookresearch/SlowFast/tree/main/projects/mvit. When using torch.optim.SGD(momentum=0.9), the code consumes 9221MiB of GPU memory during training. After changing it to bnb.optim.SGD8bit() with the same arguments, it consumes even a bit more GPU memory, 9235MiB. Do you have any idea why this would happen? Thank you! My CUDA version is 10.2 and my torch version is 1.9.1.

    Best, Junwei

    question 
    opened by JunweiLiang 2
  • bnb.optim.AdamW

    Hey @TimDettmers,

    Awesome library! bnb.optim.Adam saved me from having to use model parallelism :heart_eyes:

    Do you think it would be easy to also add a bnb.optim.AdamW version for https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW ?

    Happy to give it a try if you think it's easily feasible :-)

    enhancement 
    opened by patrickvonplaten 2
  • undefined symbol: __fatbinwrap_38

    With some CUDA versions and on some architectures this error occurs:

    Traceback (most recent call last):
      File "check_bnb_install.py", line 1, in <module>
        import bitsandbytes as bnb
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/__init__.py", line 5, in <module>
        from .optim import adam
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/__init__.py", line 5, in <module>
        from .adam import Adam, Adam8bit, Adam32bit
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/adam.py", line 5, in <module>
        from bitsandbytes.optim.optimizer import Optimizer2State
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/optimizer.py", line 6, in <module>
        import bitsandbytes.functional as F
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/functional.py", line 13, in <module>
        lib = ct.cdll.LoadLibrary(os.path.dirname(__file__) + '/libbitsandbytes.so')
      File "/miniconda/envs/pytorch_env/lib/python3.7/ctypes/__init__.py", line 442, in LoadLibrary
        return self._dlltype(name)
      File "/miniconda/envs/pytorch_env/lib/python3.7/ctypes/__init__.py", line 364, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37
    

    Confirmed for CUDA 10.1 for compute capability 7.5 (V100).

    bug 
    opened by TimDettmers 2
  • 'NoneType' object has no attribute 'cdequantize_blockwise_cpu_fp32'

    I am trying to train GPT-J with 8-bit weights. It works well on the GPU, but when I try to use it on the CPU, it gives this error:

    'NoneType' object has no attribute 'cdequantize_blockwise_cpu_fp32'

    I have used dequantize_blockwise from bitsandbytes.functional. The following is the class in which it's used:

    import torch
    import torch.nn.functional as F
    from bitsandbytes.functional import dequantize_blockwise


    class DequantizeAndLinear(torch.autograd.Function):

        @staticmethod
        def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor,
                    absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor):
            # dequantize the 8-bit weights back to float32 for the matmul
            weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
            ctx.save_for_backward(input, weights_quantized, absmax, code)
            ctx._has_bias = bias is not None
            return F.linear(input, weights_deq, bias)

        @staticmethod
        def backward(ctx, grad_output: torch.Tensor):
            assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3]
            input, weights_quantized, absmax, code = ctx.saved_tensors
            # grad_output: [*batch, out_features]
            weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
            grad_input = grad_output @ weights_deq
            grad_bias = grad_output.flatten(0, -2).sum(dim=0) if ctx._has_bias else None
            return grad_input, None, None, None, grad_bias
    
    

    Is it possible to run it on the CPU, or do I have to run it only on the GPU?

    opened by HumzaSami00 0
  • Adding Code of Conduct file

    This pull request was created automatically because we noticed your project was missing a Code of Conduct file.

    Code of Conduct files facilitate respectful and constructive communities by establishing expected behaviors for project contributors.

    This PR was crafted with love by Facebook's Open Source Team.

    CLA Signed 
    opened by facebook-github-bot 0
  • Adding Contributing file

    This pull request was created automatically because we noticed your project was missing a Contributing file.

    CONTRIBUTING files explain how a developer can contribute to the project - which you should actively encourage.

    This PR was crafted with love by Facebook's Open Source Team.

    CLA Signed 
    opened by facebook-github-bot 0
  • 8-bit optimizer crashes when fine-tuning gpt2-large

    Using the bnb.optim.Adam8bit optimizer in place of torch.optim.Adam causes a crash after a handful of batches:

    12it [00:22, 1.82s/it]Error an illegal memory access was encountered at line 198 in file /home/alyssa/gpt_math/bitsandbytes/csrc/ops.cu

    I am fine-tuning Hugging Face's version of the gpt2-large model on an Ampere 3090 GPU with CUDA version 11.6 and NVIDIA driver version 510.73.05. I have tried compiling bitsandbytes on my machine from source, and the set_optim_to_run_embedding_in_fp32 trick from https://github.com/huggingface/transformers/issues/14819; neither of them affected the behavior. Running with the standard pytorch Adam optimizer works fine. nvidia-smi shows 16 GB of memory used on a GPU with 24 GB, so it shouldn't be running out of RAM or anywhere close to that.

    opened by rationalism 0
  • bfloat16 grads are not supported

    Are there any plans to support models/grads with the bfloat16 type? bfloat16 has gained quite a lot of popularity lately, as every Ampere GPU supports the type and it eliminates the need for loss scaling compared to float16. This is what I get when I try to initialize bnb.AdamW with a bfloat16-cast model: ValueError: Gradient+optimizer bit data type combination not supported: grad torch.bfloat16, optimizer torch.uint8

    opened by kurumuz 0
  • Check dtype of input tensors is correct

    If a 16-bit float tensor on the CPU was passed as the input to quantize_blockwise or the output buffer for dequantize_blockwise, the code was previously passing its address to the c[de]quantize_blockwise_cpu_fp32 method, silently casting it to a 32-bit float* and resulting in segfaults.

    A similar issue occurs if the absmax/code arguments to dequantize_blockwise are (somehow) 16-bit, resulting in illegal memory accesses on the GPU.

    It took me a little while to track down the causes because of the cryptic errors; so I figured it was worth suggesting these changes. I've only been using the blockwise methods, so it's possible there are similar issues in other parts of the code - might be worth checking :)

    This PR also includes a couple unrelated typo fixes.

    Thanks for your work on this library, it's nice to squeeze the most I can out of my paltry GPU memory :)

    CLA Signed 
    opened by acarapetis 3
Releases(0.26.0)
  • 0.26.0(Nov 29, 2021)

    This release has important bug fixes for the StableEmbedding layer and introduces the new optimizers AdaGrad and AdamW. The 0.26.0 release also features a new, lightweight embedding class, bnb.nn.Embedding, which uses 32-bit optimizers but no layer norm. This layer allows for the easy use of pretrained models that do not use an embedding layer norm. Now available on pip.

    Changelog

    Features:

    • Added Adagrad (without grad clipping) as 32-bit and 8-bit block-wise optimizer.
    • Added AdamW (copy of Adam with weight decay init 1e-2). #10
    • Introduced ModuleConfig overrides which can seamlessly be used at initialization time of a module.
    • Added the bnb.nn.Embedding layer which runs at 32-bit but without the layer norm. This works well if you need to fine-tune pretrained models that do not have an embedding layer norm. #19

    Bug fixes:

    • Fixed a bug where weight decay was incorrectly applied to 32-bit Adam. #13
    • Fixed an unsafe use of eval. #8
    • Fixed a bug where the StableEmbedding layer 32-bit optimizer override would not work without registering the whole model first (bnb.optim.GlobalOptimManager.get_instance().register_parameters(model.parameters())). #13 #15

    Docs:

    • Added instructions on how to solve "__fatbinwrap_" errors.
  • 0.25.0(Oct 22, 2021)

    This release offers full support for all GPUs from Kepler (GTX 600) or newer. It introduces skip_zeros, which ensures correct optimizer updates for sparse gradients. While some pieces are still missing, this release also adds some features for 8-bit optimizer research: AnalysisAdam allows the tracking and analysis of 8-bit vs 32-bit Adam quantization errors.

    Features:

    • Added skip_zeros for block-wise and 32-bit optimizers. This ensures correct updates for sparse gradients and sparse models.
    • Added support for Kepler GPUs. (#4)
    • Added Analysis Adam to track 8-bit vs 32-bit quantization errors over time.
    • Make compilation more user-friendly.

    Bug fixes:

    • fixed "undefined symbol: __fatbinwrap_38" error for P100 GPUs on CUDA 10.1 (#5)

    Docs:

    • Added docs with instructions to compile from source.
Owner
Facebook Research