
Overview

NimTorch


A Nim frontend for pytorch, aiming to be mostly auto-generated and internally using ATen.

Because Nim compiles to C++, this is not a wrapper or binding library. It generates 1-to-1 native ATen code.

The only requirement from pytorch is ATen's core tensor library. Because of this, nimtorch is extremely versatile and can be compiled on any kind of device.
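
In practice this means a tensor expression written in Nim ends up as direct at:: calls inside the C++ sources that nim emits (the torch_*.cpp files in the nim cache), with no FFI layer in between.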

Current status

Early stage

  • The full ATen API, automatically generated from Declarations.yaml
  • CUDA support (add -d:cuda when compiling with nim)
  • WASM support (add -d:wasm when compiling with nim)
  • Gradient procs, automatically generated from derivatives.yaml
  • Autograd
  • Add missing derivatives
  • More high-level pytorch API (Module, Models, etc.)
  • ...

The final aim is to be as compatible as possible with the pytorch API.

Why

The ease of use of the Python language, while keeping fully native, bare-metal C++ performance

Python code

# GRUCell
gi = x.matmul(w_input.t()) + b_input
gh = hidden.matmul(w_recur.t()) + b_recur
i_r, i_i, i_n = gi.chunk(3, 1)
h_r, h_i, h_n = gh.chunk(3, 1)
resetgate = (i_r + h_r).sigmoid()
inputgate = torch.sigmoid(i_i + h_i)
newgate = (i_n + resetgate * h_n).tanh()
hy = newgate + inputgate * (hidden - newgate)

Nim code

# GRUCell
let
  gi = x.matmul(w_input.t()) + b_input
  gh = hidden.matmul(w_recur.t()) + b_recur
  (i_r, i_i, i_nn) = gi.chunk(3, 1)
  (h_r, h_i, h_n)  = gh.chunk(3, 1)
  resetgate = (i_r + h_r).sigmoid()
  inputgate = torch.sigmoid(i_i + h_i)
  newgate = (i_nn + resetgate * h_n).tanh()
  hy = newgate + inputgate * (hidden - newgate)
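
Note that i_nn is used where the Python version has i_n: Nim's identifier comparison ignores underscores, so i_n would clash with the built-in in keyword.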

Getting started

Requirements

Linux: A recent distribution on par with Ubuntu 18.04 in terms of libc and basic libraries; the GCC compiler

macOS: We compile with 10.13 minimum-version flags, but it might work on lower versions as well; Xcode for the compilers

Windows: Windows 10, the Visual Studio 2017 runtime and Visual Studio 2017 (any edition)

WASM: The latest Emscripten compiler and tools

Super easy, using conda

Linux, macOS and Windows

conda create -n nimtorch -c fragcolor nimtorch (add cuda10.0 for the CUDA 10 version, linux only, or add wasm for the WASM version)

source activate nimtorch or, on windows: conda activate nimtorch

This single command installs the nim and ATen binaries, fragments and nimtorch; nothing else is needed.

Make sure you use a recent version of conda and have a compiler installed on your system; on windows you have to add --cc:vcc and run from a developer prompt.

Make sure your system is recent (Ubuntu 18.04 reference / macOS High Sierra / Windows 10) and that you have CUDA 9.2 installed if you need CUDA (linux only; more CUDA versions are coming, please open an issue if you need a specific version).

Test with something like:

nim cpp -o:test -r $ATEN/dist/pkgs/nimtorch-\#head/tests/test_xor.nim

or on windows... (because the DLLs need to be side by side)

nim cpp -o:%ATEN%/lib/test.exe -r %ATEN%/dist/pkgs/nimtorch-#head/tests/test_xor.nim
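
As a quick sanity check, a minimal program of your own should also compile and run; here is a sketch (smoke.nim is a hypothetical file name, and it assumes the nimtorch environment is active; the same windows DLL caveat applies):

# smoke.nim - compile and run with: nim cpp -r smoke.nim
import torch

# build a small tensor and exercise a couple of the generated ATen ops
let x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
let y = x + x
print(y)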

Semi manual way

Linux, macOS and Windows

Check what version of ATen/PyTorch we need in conda/nimtorch/meta.yaml - should be something like aten ==2018.10.10.1089

Note the version as you will need it in the next step

conda create -n aten -c fragcolor aten={version}

or

WASM

conda create -n aten -c fragcolor aten={version} wasm

or Cuda 10.0 (linux only)

conda create -n aten -c fragcolor aten={version} cuda10.0

Activate the aten environment:

source activate aten or on windows: conda activate aten

  1. Make sure you have a recent Nim and Nimble version in your path
  2. Clone the release branch: git clone -b release https://github.com/fragcolor-xyz/nimtorch.git
  3. cd nimtorch
  4. nimble develop
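
(nimble develop installs the package as a link to your local clone, so edits to the checkout are picked up without reinstalling.)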

Finally

Run the self test: nim cpp -o:test -r torch.nim (use -o:%ATEN%/lib/test.exe instead on windows, because of the DLL location)

In the case of WASM:

Run the self test: nim cpp -d:wasm -o:test.js torch.nim && node test.js (needs node.js)

Manual way without requiring conda

Build ATEN

pip2 install pyyaml typing
git clone -b fragcolor-devel https://github.com/fragcolor-xyz/pytorch.git
cd pytorch
git reset --hard <commit hash> # from torch/commit.txt
git submodule update --init
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DUSE_CUDA=OFF -DBUILD_ATEN_ONLY=ON -DCMAKE_INSTALL_PREFIX=`pwd`/output ../
make -j4
make install

# also copy derivatives if we want to run generator.nim or nimble test
# notice generator.nim might need python3 and pyyaml
cp ../tools/autograd/derivatives.yaml `pwd`/output/share/

Test the build

cd <your nimtorch clone>
ATEN=<path to the ATen installation> nim cpp -r -f -o:/tmp/z01 torch.nim # for eg: ATEN=pathto/pytorch/build/output/

Notes

  • We suggest setting the OMP_WAIT_POLICY environment variable to PASSIVE when running on CPU.
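
    For example, on linux or macOS: export OMP_WAIT_POLICY=PASSIVE (on windows: set OMP_WAIT_POLICY=PASSIVE).
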
Comments
  • OSX: `nim cpp -r torch.nim` fails

    after building ATEN via https://github.com/fragcolor-xyz/nimtorch/issues/5#issuecomment-427937945:

    ATEN=/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output/lib nim cpp -r torch.nim
    error: unknown type name 'constexpr'
    
    ATEN=/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output nim cpp -r --passC:-std=c++11 torch.nim
    /Users/timothee/.cache/nim/torch_d/torch_tensors.cpp:206:14: error: no matching constructor for initialization of 'at::IntList' (aka 'ArrayRef<long long>')
            at::IntList temp(((long*) ((&self[((NI) 0)]))), selfLen_0);
                        ^    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output/include/ATen/core/ArrayRef.h:67:13: note: candidate constructor not viable: no known conversion from 'long *' to 'const long long *' for 1st argument
      constexpr ArrayRef(const T* data, size_t length)
    
    
    question macOS 
    opened by timotheecour 11
  • OSX: building ATEN via `docker build -t docker_aten_native .` fails

    Download ATen binaries or build it (instructions under)

    => no OSX option

    ATen build instructions

    cd docker && cd docker-aten-native

    is that a typo?

    is my best bet to try to follow instructions from https://github.com/pytorch/pytorch or https://github.com/pytorch/pytorch/tree/master/aten ?

    enhancement macOS 
    opened by timotheecour 11
  • SIGSEGV when running examples

    Using the latest nimtorch, I'm getting a SIGSEGV when trying to compile one of the examples (test_xor).

    I'm not sure if this is a compiler bug or a problem with ATen.

    Dockerfile:

    FROM continuumio/miniconda
    
    RUN conda install -y -c fragcolor nimtorch
    
    ADD test.nim .
    
    docker build -t nimtorch_conda .
    docker run --rm nimtorch_conda nim c test.nim
    #Hint: used config file '/opt/conda/config/nim.cfg' [Conf]
    #Hint: used config file '/opt/conda/config/config.nims' [Conf]
    #Hint: system [Processing]
    #Hint: widestrs [Processing]
    #Hint: io [Processing]
    #Hint: test [Processing]
    #Hint: torch [Processing]
    #Hint: macros [Processing]
    #Hint: cpp [Processing]
    #Hint: nimline [Processing]
    #Hint: tables [Processing]
    #Hint: hashes [Processing]
    #Hint: strutils [Processing]
    #Hint: parseutils [Processing]
    #Hint: math [Processing]
    #Hint: bitops [Processing]
    #Hint: algorithm [Processing]
    #Hint: unicode [Processing]
    #Hint: os [Processing]
    #Hint: pathnorm [Processing]
    #Hint: osseps [Processing]
    #Hint: posix [Processing]
    #Hint: times [Processing]
    #Hint: options [Processing]
    #Hint: typetraits [Processing]
    #Hint: torch_cpp [Processing]
    #Hint: tensors [Processing]
    #Hint: sequtils [Processing]
    #Hint: sets [Processing]
    #Hint: strformat [Processing]
    #Hint: tensor_ops [Processing]
    #Hint: autograd_macro [Processing]
    #Hint: autograd_backward [Processing]
    #Hint: nn [Processing]
    #Hint: modules [Processing]
    #Hint: init [Processing]
    #Hint: python_helpers [Processing]
    #Hint: functional [Processing]
    #SIGSEGV: Illegal storage access. (Attempt to read from nil?)
    

    test.nim:

    import torch
    import torch/[nn, optim]
    
    let inputs = torch.tensor([
      [0.0, 0.0],
      [0.0, 1.0],
      [1.0, 0.0],
      [1.0, 1.0],
    ])
    
    let targets = torch.tensor([
      [0.0],
      [1.0],
      [1.0],
      [0.0],
    ])
    
    let
      fc1 = nn.Linear(2, 4)
      fc2 = nn.Linear(4, 1)
      loss_fn = nn.MSELoss()
      optimizer = optim.SGD(fc1.parameters & fc2.parameters , lr = 0.01, momentum = 0.1)
    
    set_num_threads(1)
    
    when defined gperftools:
      discard ProfilerStart("test_xor.log")
    
    for i in 0 ..< 50000:
      optimizer.zero_grad()
    
      let predictions = inputs.fc1.relu.fc2.sigmoid
    
      let loss = loss_fn(predictions, targets)
      loss.backward()
      optimizer.step()
    
      if i mod 5000 == 0:
        print(loss)
    
    when defined gperftools:
      ProfilerStop()
    
    opened by singularperturbation 4
  • can't install with nimble

    $ nim --version
    Nim Compiler Version 0.18.1 [Windows: amd64]
    Compiled at 2018-08-18
    Copyright (c) 2006-2018 by Andreas Rumpf
    git hash: b5171f57ef00bffb12387d7daf3487c5e07645f9
    active boot switches: -d:release

    $ nimble install nimtorch
    Prompt: nimtorch not found in any local packages.json, check internet for updated packages? [y/N]
    Answer: y
    Downloading Official package list
    Success Package list downloaded.
    Tip: 3 messages have been suppressed, use --verbose to show them.
    Error: Package not found.

    opened by retsyo 3
  • Cannot compile nimtorch tests on Windows 10

    C:/Program Files/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/8.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\ATen_cpu.lib
    C:/Program Files/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/8.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\cpuinfo.lib
    collect2.exe: error: ld returned 1 exit status
    Error: execution of an external program failed: 'g++.exe   -o C:\Users\vsagar200\work\soft\nimtorch\tests\test_xor.exe  C:\Users\vsagar200\nimcache\test_xor_d\torch_test_xor.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_system.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_torch.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\fragments_cpp.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_macros.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_tables.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_hashes.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_math.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_strutils.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_parseutils.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_bitops.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_algorithm.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_unicode.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_ospaths.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_winlean.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_dynlib.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_torch_cpp.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_os.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_times.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_options.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_typetraits.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_strformat.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_tensors.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_sequtils.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_sets.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_tensor_ops.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_autograd_macro.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_autograd_backward.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_nn.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_init.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_python_helpers.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_optim.cpp.o  -LC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib -LC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib64 -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\ATen_cpu.lib -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\cpuinfo.lib '
    

    The ATEN path is set correctly, and the .lib files are present at the exact location the error says it cannot find them; tried with nim 0.18.1 and 0.19.0.

    question 
    opened by eshitasagar 2
  • install Aten with nimtorch

    Thanks for the library. Are there any plans to distribute ATen together with nimtorch? This would make it easier to build command-line tools that depend on it.

    opened by brentp 1
  • [TODO] [offtopic] discussion regarding nimble limitation (from comments in #6)

    Moved here from the discussion started at https://github.com/fragcolor-xyz/nimtorch/issues/6#issuecomment-430511494, to keep each topic separate.

    @sinkingsugar

    Nimble has a major flaw: by default it applies every package it has installed to every project you have. My major concern is that this can easily create a mess if not considered.

    I will give you an example exactly with nimtorch:

      • Have nimtorch installed as a nimble package
      • Also work on the repository
      • Do not run nimble develop
      • Rename a file in the repository
      • Forget to update the module import myrenamedfile into import newlocation/myrenamedfile
      • The build will succeed, yet this time it will use import myrenamedfile from nimble.

    but it all works if you run nimble develop, right? I think that's expected and I don't see a limitation here (in the sense of making it impossible to do certain things). if you really feel something is ill-designed with nimble it really should be a bug report in https://github.com/nim-lang/nimble/issues/ otherwise it will never get fixed (if there's anything to fix).

    That being said, one possibility would be (and that's doable, not a fundamental flaw IMO): if you call nimble build inside a local package foo, nimble could remove an installed package named foo from the search path.

    opened by timotheecour 1
  • Add a Gitter chat badge to README.md

    fragcolor-xyz/nimtorch now has a Chat Room on Gitter

    @sinkingsugar has just created a chat room. You can visit it here: https://gitter.im/nimtorch/Lobby.

    This pull-request adds this badge to your README.md:

    Gitter

    If my aim is a little off, please let me know.

    Happy chatting.

    PS: Click here if you would prefer not to receive automatic pull-requests from Gitter in future.

    opened by gitter-badger 0
  • Is this project still active

    Hi,

    I think a working binding to pytorch from nim could be very valuable to support the use of nim in data science. This project seems to have been inactive for a long time now, and it is not using the current version of pytorch - any chance that it will be updated?

    opened by bitstormFA 5
  • Question. Import model trained under a Python / Torch library.

    Would it be possible, or even advisable, to import a pth or pkl that was trained using FastAI into NimTorch, for the purpose of exposing it in a backend written in Nim (for efficiency and speed)?

    opened by UNIcodeX 17
  • Add "install fragments" to the non-conda installation doc

    Add nimble install fragments to this part of the readme. As a beginner I wasted an hour trying to figure out what I did wrong; it turned out to be just a missing package.

    opened by mritunjaymusale 0
  • Error: expression 'step(optimizer)' has no type (or is ambiguous)

    I am trying to use nimtorch (I am new to pytorch as well). I am struggling to run a first example.

    I have installed in Windows 10 like this:

    conda create -n aten -c fragcolor aten=2019.02.16.2841
    nimble install fragments
    nimble install torch@#head
    

    And then I tried to compile the code from here:

    import torch
    import torch/[nn, optim]
    
    let
      inputs = torch.tensor([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
      targets = torch.tensor([[0.0], [1.0], [1.0], [0.0]])
    
    let
      fc1 = nn.Linear(2, 4)
      fc2 = nn.Linear(4, 1)
      loss_fn = nn.MSELoss()
      optimizer = optim.SGD(fc1.parameters & fc2.parameters, lr = 0.01, momentum = 0.1)
    
    for i in 0 ..< 50000:
      optimizer.zero_grad()
    
      var predictions = fc1(inputs).relu()
      predictions = fc2(predictions).sigmoid()
    
      let loss = loss_fn(predictions, targets)
      loss.backward()
      discard optimizer.step()
    
      if i mod 5000 == 0:
        print(loss)
    

    I compile by doing:

    c:> conda activate aten
    c:> nim cpp ex01
    ....
    C:\Users\mantielero\Documents\src\torch\ex01.nim(22, 25) Error: expression 'step(optimizer)' has no type (or is ambiguous)
    

    which is the line:

    discard optimizer.step()
    

    What am I doing wrong?

    opened by mantielero 0
  • Nimble installation fails

    It's been a while since the last commit - please don't tell me nimtorch is dead. It's wonderful, I want to start using it, and libraries like this would pump up nim's popularity. Are you setting it aside for a while but planning to pick it back up, or is it completely abandoned?

    opened by RecruitMain707 6
Releases: v0.2.0

Owner: Giovanni Petrantoni, Founder and CEO @ Fragcolor