Prevent `CUDA error: out of memory` in just 1 line of code.


๐Ÿจ Koila

Koila solves the `CUDA error: out of memory` error painlessly. Fix it with just one line of code, and forget about it.



🚀 Features

  • 🙅 Prevents the CUDA error: out of memory error with one single line of code.

  • 🦥 Lazily evaluates PyTorch code to save computing power.

  • ✂️ Automatically splits the batch dimension into more GPU-friendly sizes (powers of 2) to speed up execution (see the sketch after this list).

  • 🤏 Minimal API (wrapping all inputs will be enough).
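
As a rough illustration of the splitting idea (purely hypothetical, not Koila's internals; the pow2_chunks helper below is made up for this sketch), a batch of 100 samples could be covered by power-of-2 chunk sizes and split along dim 0 like this:

import torch

def pow2_chunks(n, largest):
    # Greedily cover n samples with chunk sizes that are powers of 2,
    # starting from `largest` and shrinking as needed.
    sizes = []
    size = largest
    while n > 0:
        while size > n:
            size //= 2
        sizes.append(size)
        n -= size
    return sizes

batch = torch.randn(100, 28, 28)
chunks = torch.split(batch, pow2_chunks(batch.shape[0], largest=64), dim=0)
print([chunk.shape[0] for chunk in chunks])  # [64, 32, 4]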

🤔 Why Koila?

Ever encountered RuntimeError: CUDA error: out of memory? We all love PyTorch for its speed, efficiency, and transparency, but that also means it doesn't do much beyond the essentials, such as preventing a very common error that has been bothering users since 2017.

This library aims to prevent that by being a lightweight wrapper over native PyTorch. When a tensor is wrapped, the library automatically computes the amount of remaining GPU memory and uses the right batch size, saving everyone from having to manually fine-tune the batch size whenever a model is used.
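
For reference, querying the remaining GPU memory in plain PyTorch looks roughly like this (a minimal sketch, assuming a recent PyTorch version and an available CUDA device; Koila's internal bookkeeping may differ):

import torch

if torch.cuda.is_available():
    # Returns (free_bytes, total_bytes) for the current CUDA device.
    free, total = torch.cuda.mem_get_info()
    print(f"{free / 2**20:.0f} MiB free out of {total / 2**20:.0f} MiB")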

The library also picks a batch size suited to your GPU. Did you know that bigger batches don't always speed up processing? That is handled automatically here too.

Because Koila code is PyTorch code (it runs PyTorch under the hood), you can use the two together without worrying about compatibility.

Oh, and all that in 1 line of code! 😊

โฌ‡๏ธ Installation

Koila is available on PyPI. To install, run the following command.

pip install koila

๐Ÿƒ Getting started

Usage is dead simple. For example, suppose you have the following PyTorch code (copied from PyTorch's tutorial).

Define the input, label, and model:

import torch
from torch.nn import CrossEntropyLoss, Flatten, Linear, Module, ReLU, Sequential

# A batch of MNIST images
input = torch.randn(8, 28, 28)

# A batch of labels
label = torch.randint(0, 10, [8])

class NeuralNetwork(Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = Flatten()
        self.linear_relu_stack = Sequential(
            Linear(28 * 28, 512),
            ReLU(),
            Linear(512, 512),
            ReLU(),
            Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

Define the loss function, then calculate the output and loss.

loss_fn = CrossEntropyLoss()
network = NeuralNetwork()

# Calculate the loss
out = network(input)
loss = loss_fn(out, label)

# Backward pass
network.zero_grad()
loss.backward()

OK. How do you adapt the code to use Koila's features?

You change this line of code:

from koila import lazy

# Wrap the input tensor.
# If a batch argument is provided, that dimension of the tensor is treated as the batch dimension.
# In this case, the first dimension (dim=0) is used as the batch dimension.
input = lazy(torch.randn(8, 28, 28), batch=0)

Done. You will not run out of memory again.

See examples/getting-started.py for the full example.

๐Ÿ‹๏ธ How does it work under the hood?

CUDA error: out of memory generally happens in the forward pass, because the temporary variables created there need to be kept in memory.

Koila is a thin wrapper around PyTorch. It is inspired by TensorFlow's static/lazy evaluation. By building the graph first and running the model only when necessary, Koila has access to all the information it needs to determine how many resources the computation really requires.

In terms of memory usage, only the shapes of temporary variables are required to calculate their memory footprint in the model. For example, + takes two tensors of equal size and outputs a tensor of that same size, and log takes one tensor and outputs another tensor of the same shape. Broadcasting makes things a little more complicated, but the general idea is the same. By tracking all these shapes, one can easily tell how much memory a forward pass uses and select the optimal batch size accordingly.
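
To make that concrete, here is a minimal, hypothetical sketch (not Koila's actual implementation) that estimates the forward-pass memory of the toy network above purely from shapes, then picks the largest power-of-2 batch size that fits a made-up memory budget:

import math

def tensor_bytes(shape, itemsize=4):
    # Memory for one float32 tensor of the given shape.
    return math.prod(shape) * itemsize

def forward_bytes(batch_size):
    # Shapes of the temporaries produced by the toy network above.
    flattened = (batch_size, 28 * 28)
    hidden1 = (batch_size, 512)
    hidden2 = (batch_size, 512)
    logits = (batch_size, 10)
    return sum(tensor_bytes(s) for s in (flattened, hidden1, hidden2, logits))

budget = 32 * 1024 * 1024  # pretend only 32 MiB of GPU memory is free
batch_size = 1
while forward_bytes(batch_size * 2) <= budget:
    batch_size *= 2
print(batch_size, forward_bytes(batch_size))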

๐ŸŒ It sounds slow. Is it?

NO. Granted, calculating shapes and computing sizes and memory usage sounds like a lot of work. However, keep in mind that even a gigantic model like GPT-3, which has 96 layers, has only a few hundred nodes in its computation graph. Because Koila's algorithms run in linear time, any modern computer can handle a graph like this instantly.

Most of the compute time is spent on evaluating individual tensors and transferring tensors across devices, and bear in mind that those checks happen in vanilla PyTorch anyway. So no, not slow at all.

🔊 How to pronounce koila?

This project was originally named koala, after the laziest species in the world, since this project is about lazy evaluation of tensors. However, as that name is taken on PyPI, I had no choice but to use another name. Koila is a word I made up, pronounced similarly to voila (a French word), so it sounds like koala.

โญ Give me a star!

If you like what you see, please consider giving this a star (★)!

๐Ÿ—๏ธ Why did I build this?

Batch size search is not new. In fact, the mightily popular PyTorch Lightning has it. So why did I go through the trouble of building this project?

PyTorch Lightning's batch size search is deeply integrated into its own ecosystem. You have to use its DataLoader, subclass its models, and train your models accordingly. While that works well for supervised learning tasks, it's really painful to use for reinforcement learning, where interacting with the environment is a must.

In comparison, because Koila is a super lightweight PyTorch wrapper, it works whenever PyTorch works, thus providing maximum flexibility and requiring minimal changes to existing code.

๐Ÿ“ Todos

  • 🧩 Provide an extensible API so users can write custom functions.
  • 😌 Simplify the internal workings even further (especially the interaction between Tensors and LazyTensors).
  • 🍪 Work with multiple GPUs.

🚧 Warning

The code works in many cases, but it's still a work in progress. This is not (yet) a fully PyTorch-compatible library, due to limited time.

🥰 Contributing

We take openness and inclusiveness very seriously. We have adopted a Code of Conduct that we expect project participants to adhere to.
