PyTorch - Python + Nim

Overview

NimTorch


A Nim frontend for PyTorch, aiming to be mostly auto-generated and internally using ATen.

Because Nim compiles to C++, this is not a wrapper or binding library: it generates 1-to-1 native ATen code.

The only thing needed from PyTorch is ATen's core tensor library. Because of this, nimtorch is extremely versatile and can be compiled for virtually any kind of device.

Current status

Early stage

  • The full ATen API, automatically generated from Declarations.yaml
  • CUDA support (add -d:cuda when compiling with Nim; see the example after this list)
  • WASM support (add -d:wasm when compiling with Nim)
  • Gradient procs, automatically generated from derivatives.yaml
  • Autograd
  • Add missing derivatives
  • More high-level PyTorch API (Module, Models etc.)
  • ...
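
For example, a CUDA-enabled build of the self test could be invoked with something like the following (a sketch only: it assumes a CUDA build of ATen is installed, as described under Getting started below):

nim cpp -d:cuda -o:test -r torch.nim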

The final aim is to be as compatible as possible with the PyTorch API.

Why

The ease of use of the Python language, while keeping fully native, bare-metal C++ performance.

Python code

# GRUCell
gi = x.matmul(w_input.t()) + b_input
gh = hidden.matmul(w_recur.t()) + b_recur
i_r, i_i, i_n = gi.chunk(3, 1)
h_r, h_i, h_n = gh.chunk(3, 1)
resetgate = (i_r + h_r).sigmoid()
inputgate = torch.sigmoid(i_i + h_i)
newgate = (i_n + resetgate * h_n).tanh()
hy = newgate + inputgate * (hidden - newgate)

Nim code

# GRUCell
let
  gi = x.matmul(w_input.t()) + b_input
  gh = hidden.matmul(w_recur.t()) + b_recur
  (i_r, i_i, i_nn) = gi.chunk(3, 1)
  (h_r, h_i, h_n)  = gh.chunk(3, 1)
  resetgate = (i_r + h_r).sigmoid()
  inputgate = torch.sigmoid(i_i + h_i)
  newgate = (i_nn + resetgate * h_n).tanh()
  hy = newgate + inputgate * (hidden - newgate)
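
The same computation can also be wrapped in an ordinary Nim proc. The sketch below is only illustrative: the proc name, its parameter list and the Tensor type annotation are assumptions made for the example, not something prescribed by the snippet above.

# Hypothetical helper wrapping the GRUCell snippet above in a proc.
# Assumes `import torch` and that nimtorch exposes a Tensor type.
proc gruCell(x, hidden, w_input, w_recur, b_input, b_recur: Tensor): Tensor =
  let
    gi = x.matmul(w_input.t()) + b_input
    gh = hidden.matmul(w_recur.t()) + b_recur
    (i_r, i_i, i_nn) = gi.chunk(3, 1)
    (h_r, h_i, h_n)  = gh.chunk(3, 1)
    resetgate = (i_r + h_r).sigmoid()
    inputgate = torch.sigmoid(i_i + h_i)
    newgate = (i_nn + resetgate * h_n).tanh()
  result = newgate + inputgate * (hidden - newgate)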

Getting started

Requirements

Linux: a recent distribution on par with Ubuntu 18.04 in terms of libc and basic libraries, plus the GCC compiler

macOS: we compile with a 10.13 minimum-version flag, but it might also work on lower versions; Xcode is needed for the compilers

Windows: Windows 10, the Visual Studio 2017 Runtime and Visual Studio 2017 (any edition)

WASM: the latest Emscripten compiler and tools

Super easy, using conda

Linux, macOS and Windows

conda create -n nimtorch -c fragcolor nimtorch (append cuda10.0 for the CUDA 10 version, Linux only, or wasm for the WASM version)

source activate nimtorch, or on Windows: conda activate nimtorch

This will install the Nim and ATen binaries, fragments and nimtorch, all in one command; nothing else is needed.

Make sure you use a recent version of conda and have a compiler installed on your system; on Windows you have to add --cc:vcc and compile from a developer prompt.

Make sure your system is recent (Ubuntu 18.04 as reference / macOS High Sierra / Windows 10) and that you have CUDA 9.2 installed if you need CUDA (Linux only; more CUDA versions are coming, please open an issue if you need a specific version).

Test with something like:

nim cpp -o:test -r $ATEN/dist/pkgs/nimtorch-#head/tests/test_xor.nim

or on Windows (because the DLLs need to be side by side with the executable):

nim cpp -o:%ATEN%/lib/test.exe -r %ATEN%/dist/pkgs/nimtorch-#head/tests/test_xor.nim
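
If you just want a quick sanity check of the installation, a tiny program along these lines should also compile and run (a minimal sketch: the file name check.nim is arbitrary, and it assumes the nimtorch conda environment is active):

# check.nim - minimal smoke test
import torch

let
  a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
  b = torch.tensor([[1.0, 1.0], [1.0, 1.0]])

print(a + b)

Compile and run it with nim cpp -o:check -r check.nim (on Windows, place the output next to the DLLs as shown above, e.g. -o:%ATEN%/lib/check.exe).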

Semi manual way

Linux, macOS and Windows

Check which version of ATen/PyTorch is needed in conda/nimtorch/meta.yaml; it should be something like aten ==2018.10.10.1089

Note the version, as you will need it in the next step

conda create -n aten -c fragcolor aten={version}

or

WASM

conda create -n aten -c fragcolor aten={version} wasm

or CUDA 10.0 (Linux only)

conda create -n aten -c fragcolor aten={version} cuda10.0

Activate the aten environment:

source activate aten, or on Windows: conda activate aten

  1. Make sure you have a recent Nim and Nimble version in your path
  2. Clone the release branch: git clone -b release https://github.com/fragcolor-xyz/nimtorch.git
  3. cd nimtorch
  4. nimble develop

Finally, run the self test:

nim cpp -o:test -r torch.nim (use -o:%ATEN%/lib/test.exe instead on Windows because of the DLL location)

In the case of WASM, run the self test with:

nim cpp -d:wasm -o:test.js torch.nim && node test.js (needs Node.js)

Manual way without requiring conda

Build ATEN

pip2 install pyyaml typing
git clone -b fragcolor-devel https://github.com/fragcolor-xyz/pytorch.git
cd pytorch
git reset --hard <commit hash> # from torch/commit.txt
git submodule update --init
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DUSE_CUDA=OFF -DBUILD_ATEN_ONLY=ON -DCMAKE_INSTALL_PREFIX=`pwd`/output ../
make -j4
make install

# also copy derivatives.yaml if we want to run generator.nim or nimble test
# note that generator.nim might need python3 and pyyaml
cp ../tools/autograd/derivatives.yaml `pwd`/output/share/

Test the build

cd <nimtorch repo>
ATEN=<installation path of ATEN> nim cpp -r -f -o:/tmp/z01 torch.nim # e.g. ATEN=pathto/pytorch/build/output/

Notes

  • We suggest setting the OMP_WAIT_POLICY environment variable to PASSIVE when running on CPU.
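
For example, in a bash-like shell, before launching the compiled program:

export OMP_WAIT_POLICY=PASSIVE
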
Comments
  • OSX: `nim cpp -r torch.nim` fails

    after building ATEN via https://github.com/fragcolor-xyz/nimtorch/issues/5#issuecomment-427937945:

    ATEN=/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output/lib nim cpp -r torch.nim
    error: unknown type name 'constexpr'
    
    ATEN=/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output nim cpp -r --passC:-std=c++11 torch.nim
    /Users/timothee/.cache/nim/torch_d/torch_tensors.cpp:206:14: error: no matching constructor for initialization of 'at::IntList' (aka 'ArrayRef<long long>')
            at::IntList temp(((long*) ((&self[((NI) 0)]))), selfLen_0);
                        ^    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output/include/ATen/core/ArrayRef.h:67:13: note: candidate constructor not viable: no known conversion from 'long *' to 'const long long *' for 1st argument
      constexpr ArrayRef(const T* data, size_t length)
    
    
    question macOS 
    opened by timotheecour 11
  • OSX: building ATEN via `docker build -t docker_aten_native .` fails

    Download ATen binaries or build it (instructions under)

    => no OSX option

    ATen build instructions

    cd docker && cd docker-aten-native

    is that a typo?

    is my best bet to try to follow instructions from https://github.com/pytorch/pytorch or https://github.com/pytorch/pytorch/tree/master/aten ?

    enhancement macOS 
    opened by timotheecour 11
  • SIGSEGV when running examples

    Using the latest nimtorch, I'm getting a SIGSEGV when trying to compile one of the examples (test_xor).

    I'm not sure if this is a compiler bug or a problem with Aten?

    Dockerfile:

    FROM continuumio/miniconda
    
    RUN conda install -y -c fragcolor nimtorch
    
    ADD test.nim .
    
    docker build -t nimtorch_conda .
    docker run --rm nimtorch_conda nim c test.nim
    #Hint: used config file '/opt/conda/config/nim.cfg' [Conf]
    #Hint: used config file '/opt/conda/config/config.nims' [Conf]
    #Hint: system [Processing]
    #Hint: widestrs [Processing]
    #Hint: io [Processing]
    #Hint: test [Processing]
    #Hint: torch [Processing]
    #Hint: macros [Processing]
    #Hint: cpp [Processing]
    #Hint: nimline [Processing]
    #Hint: tables [Processing]
    #Hint: hashes [Processing]
    #Hint: strutils [Processing]
    #Hint: parseutils [Processing]
    #Hint: math [Processing]
    #Hint: bitops [Processing]
    #Hint: algorithm [Processing]
    #Hint: unicode [Processing]
    #Hint: os [Processing]
    #Hint: pathnorm [Processing]
    #Hint: osseps [Processing]
    #Hint: posix [Processing]
    #Hint: times [Processing]
    #Hint: options [Processing]
    #Hint: typetraits [Processing]
    #Hint: torch_cpp [Processing]
    #Hint: tensors [Processing]
    #Hint: sequtils [Processing]
    #Hint: sets [Processing]
    #Hint: strformat [Processing]
    #Hint: tensor_ops [Processing]
    #Hint: autograd_macro [Processing]
    #Hint: autograd_backward [Processing]
    #Hint: nn [Processing]
    #Hint: modules [Processing]
    #Hint: init [Processing]
    #Hint: python_helpers [Processing]
    #Hint: functional [Processing]
    #SIGSEGV: Illegal storage access. (Attempt to read from nil?)
    

    test.nim:

    import torch
    import torch/[nn, optim]
    
    let inputs = torch.tensor([
      [0.0, 0.0],
      [0.0, 1.0],
      [1.0, 0.0],
      [1.0, 1.0],
    ])
    
    let targets = torch.tensor([
      [0.0],
      [1.0],
      [1.0],
      [0.0],
    ])
    
    let
      fc1 = nn.Linear(2, 4)
      fc2 = nn.Linear(4, 1)
      loss_fn = nn.MSELoss()
      optimizer = optim.SGD(fc1.parameters & fc2.parameters , lr = 0.01, momentum = 0.1)
    
    set_num_threads(1)
    
    when defined gperftools:
      discard ProfilerStart("test_xor.log")
    
    for i in 0 ..< 50000:
      optimizer.zero_grad()
    
      let predictions = inputs.fc1.relu.fc2.sigmoid
    
      let loss = loss_fn(predictions, targets)
      loss.backward()
      optimizer.step()
    
      if i mod 5000 == 0:
        print(loss)
    
    when defined gperftools:
      ProfilerStop()
    
    opened by singularperturbation 4
  • can't install with nimble

    $ nim --version
    Nim Compiler Version 0.18.1 [Windows: amd64]
    Compiled at 2018-08-18
    Copyright (c) 2006-2018 by Andreas Rumpf
    git hash: b5171f57ef00bffb12387d7daf3487c5e07645f9
    active boot switches: -d:release

    $ nimble install nimtorch
    Prompt: nimtorch not found in any local packages.json, check internet for updated packages? [y/N] y
    Answer: Downloading Official package list
    Success Package list downloaded.
    Tip: 3 messages have been suppressed, use --verbose to show them.
    Error: Package not found.

    opened by retsyo 3
  • Cannot compile nimtorch tests on Windows 10

    C:/Program Files/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/8.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\ATen_cpu.lib
    C:/Program Files/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/8.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\cpuinfo.lib
    collect2.exe: error: ld returned 1 exit status
    Error: execution of an external program failed: 'g++.exe   -o C:\Users\vsagar200\work\soft\nimtorch\tests\test_xor.exe  C:\Users\vsagar200\nimcache\test_xor_d\torch_test_xor.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_system.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_torch.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\fragments_cpp.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_macros.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_tables.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_hashes.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_math.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_strutils.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_parseutils.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_bitops.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_algorithm.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_unicode.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_ospaths.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_winlean.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_dynlib.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_torch_cpp.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_os.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_times.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_options.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_typetraits.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_strformat.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_tensors.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_sequtils.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_sets.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_tensor_ops.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_autograd_macro.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_autograd_backward.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_nn.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_init.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_python_helpers.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_optim.cpp.o  -LC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib -LC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib64 -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\ATen_cpu.lib -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\cpuinfo.lib '
    

    The ATEN path is correctly set. I tried with nim 0.18.1 and 0.19.0, and the .lib files are present at the exact location the error says it cannot find them.

    question 
    opened by eshitasagar 2
  • install Aten with nimtorch

    Thanks for the library. Are there any plans to have ATen distributed with nimtorch? This would make it easier to build command-line tools that depend on it.

    opened by brentp 1
  • [TODO] [offtopic] discussion regarding nimble limitation (from comments in #6)

    Moved here from https://github.com/fragcolor-xyz/nimtorch/issues/6#issuecomment-430511494 to keep each topic separate.

    @sinkingsugar

    Nimble has a major flaw: by default it applies all the packages it has installed to every project you have. That's my major concern, since it can easily create a mess if not considered.

    I will give you an example exactly with nimtorch: have nimtorch installed as a nimble package, also work on the repository, but do not run nimble develop. Rename a file in the repository and forget to update the module import myrenamedfile to import newlocation/myrenamedfile. The build will succeed, yet this time it will use import myrenamedfile from the nimble package.

    But it all works if you run nimble develop, right? I think that's expected and I don't see a limitation here (in the sense of making it impossible to do certain things). If you really feel something is ill-designed in nimble, it should really be reported as a bug at https://github.com/nim-lang/nimble/issues/, otherwise it will never get fixed (if there's anything to fix).

    That being said, one possibility would be (and that's doable, not a fundamental flaw IMO): if you call nimble build inside a local package foo, nimble could remove an installed package named foo from the search path.

    opened by timotheecour 1
  • Add a Gitter chat badge to README.md

    fragcolor-xyz/nimtorch now has a Chat Room on Gitter

    @sinkingsugar has just created a chat room. You can visit it here: https://gitter.im/nimtorch/Lobby.

    This pull-request adds this badge to your README.md:

    Gitter

    If my aim is a little off, please let me know.

    Happy chatting.

    PS: Click here if you would prefer not to receive automatic pull-requests from Gitter in future.

    opened by gitter-badger 0
  • Is this project still active

    Hi,

    I think a working binding to PyTorch from Nim could be very valuable to support the use of Nim in data science. This project seems to have been inactive for a long time now and it is not using the current version of PyTorch; any chance that it will be updated?

    opened by bitstormFA 5
  • Question. Import model trained under a Python / Torch library.

    Would it be possible, or even advisable, to import a .pth or .pkl that was trained using FastAI into NimTorch, for the purpose of exposing it in a backend written in Nim (for efficiency and speed)?

    opened by UNIcodeX 17
  • Add "install fragments" to the non-conda installation doc

    Add nimble install fragments to this part of the readme. As a beginner I wasted an hour trying to figure out what I did wrong; it turned out to be just a missing package.

    opened by mritunjaymusale 0
  • Error: expression 'step(optimizer)' has no type (or is ambiguous)

    I am trying to use nimtorch (I am new to pytorch as well). I am struggling to run a first example.

    I have installed in Windows 10 like this:

    conda create -n aten -c fragcolor aten=2019.02.16.2841
    nimble install fragments
    nimble install torch@#head
    

    And then I tried to compile the code from here:

    import torch
    import torch/[nn, optim]
    
    let
      inputs = torch.tensor([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
      targets = torch.tensor([[0.0], [1.0], [1.0], [0.0]])
    
    let
      fc1 = nn.Linear(2, 4)
      fc2 = nn.Linear(4, 1)
      loss_fn = nn.MSELoss()
      optimizer = optim.SGD(fc1.parameters & fc2.parameters, lr = 0.01, momentum = 0.1)
    
    for i in 0 ..< 50000:
      optimizer.zero_grad()
    
      var predictions = fc1(inputs).relu()
      predictions = fc2(predictions).sigmoid()
    
      let loss = loss_fn(predictions, targets)
      loss.backward()
      discard optimizer.step()
    
      if i mod 5000 == 0:
        print(loss)
    

    I compile by doing:

    c:> conda activate aten
    c:> nim cpp ex01
    ....
    C:\Users\mantielero\Documents\src\torch\ex01.nim(22, 25) Error: expression 'step(optimizer)' has no type (or is ambiguous)
    

    which is the line:

    discard optimizer.step()
    

    What am I doing wrong?

    opened by mantielero 0
  • Nimble installation fails

    It's been a while since the last commit; please don't tell me nimtorch is dead. It's wonderful and I want to start using it, and libraries like this would pump up Nim's popularity. Are you leaving it for a while but planning to pick it back up, or is it completely abandoned?

    opened by RecruitMain707 6
Releases: v0.2.0

Owner: Giovanni Petrantoni