PyTorch wrapper for Taichi data-oriented class

Stannum

PyTorch wrapper for Taichi data-oriented class

PRs are welcome; please see TODOs.

Usage

from stannum import Tin
import torch

data_oriented = TiClass()  # some Taichi data-oriented class 
device = torch.device("cpu")
tin_layer = Tin(data_oriented, device=device) \
    .register_kernel(data_oriented.forward_kernel) \
    .register_input_field(data_oriented.input_field, True) \
    .register_output_field(data_oriented.output_field, True) \
    .register_weight_field(data_oriented.weight_field, True, name="field name") \
    .finish()  # finish() is required to finish construction
tin_layer.set_kernel_args(1.0)
output = tin_layer(input_tensor)

For input and output:

  • We can register multiple input_fields, output_fields and weight_fields (see the sketch after this list).
  • At least one input_field and one output_field should be registered.
  • The order of input tensors must match the registration order of input_fields.
  • The output order will align with the registration order of output_fields.
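
For instance, here is a hedged sketch with two inputs and two outputs (MultiIOClass, its fields and the tensors are hypothetical names, not part of stannum):

from stannum import Tin
import torch

multi_io = MultiIOClass()  # hypothetical Taichi data-oriented class with two inputs and two outputs
device = torch.device("cpu")
tin_layer = Tin(multi_io, device=device) \
    .register_kernel(multi_io.forward_kernel) \
    .register_input_field(multi_io.input_field_a, True) \
    .register_input_field(multi_io.input_field_b, True) \
    .register_output_field(multi_io.output_field_x, True) \
    .register_output_field(multi_io.output_field_y, True) \
    .finish()
# input tensors follow the registration order: input_field_a, then input_field_b
# outputs come back in the registration order: output_field_x, then output_field_y
out_x, out_y = tin_layer(tensor_a, tensor_b)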

Installation & Dependencies

Install stannum with pip:

python -m pip install stannum

Make sure you have the following installed:

  • PyTorch
  • Taichi

TODOs

Documentation

  • Code documentation
  • Documentation for users
  • Nicer error messages

Engineering

  • Set up CI pipeline

Features

  • PyTorch-related:
    • PyTorch checkpoint and save model
    • Proxy torch.nn.parameter.Parameter for weight fields for optimizers
  • Python related:
    • @property for a data-oriented class as an alternative way to register
  • Taichi related:
    • Wait for Taichi to have native PyTorch tensor view to optimize performance
    • Automatic Batching - waiting for upstream Taichi improvement
      • workaround for now: do static manual batching, i.e. extend fields with one extra dimension for batching (see the sketch after this list)
  • Self:
    • Allow registering multiple kernels in a call chain fashion
      • workaround for now: combine kernels into a mega kernel using @ti.complex_kernel
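
A minimal sketch of the static manual batching workaround mentioned above (the class, field names and sizes are hypothetical): instead of a field of shape (n,), allocate (batch_size, n) and loop over the extra dimension inside the kernel.

import taichi as ti

ti.init(arch=ti.cpu)
batch_size, n = 4, 10

@ti.data_oriented
class BatchedAdd:
    def __init__(self):
        # the extra leading dimension serves as a static batch dimension
        self.input_field = ti.field(ti.f32, shape=(batch_size, n), needs_grad=True)
        self.other_field = ti.field(ti.f32, shape=(batch_size, n), needs_grad=True)
        self.output_field = ti.field(ti.f32, shape=(batch_size, n), needs_grad=True)

    @ti.kernel
    def forward_kernel(self):
        # loop over the batch dimension explicitly inside the kernel
        for b, i in self.input_field:
            self.output_field[b, i] = self.input_field[b, i] + self.other_field[b, i]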

Misc

  • A nice logo
Comments
  • Compatible changes for v1.1.0 rc

    Compatible changes for v1.1.0 rc

    We're in the process of preparing the v1.1.0 release candidate wheel and noticed this PR is required for stannum to work with v1.1.0.

    v1.1.0 tracking: https://github.com/taichi-dev/taichi/milestone/5

    opened by ailzhang 3
  • Get rid of eager mode

    Get rid of eager mode

    When the problems in https://github.com/taichi-dev/taichi/pull/4356 are fully resolved, we can safely get rid of the eager mode introduced in v0.5.0 without a performance penalty, reducing overhead.

    Taichi-related wait_for_upstream 
    opened by ifsheldon 1
  • Get rid of clearing fields

    Get rid of clearing fields

    Once https://github.com/taichi-dev/taichi/issues/4334 and https://github.com/taichi-dev/taichi/issues/4016 get resolved, we can get rid of auto_clear (introduced in v0.4.4) and the clearing in Tube to avoid unnecessary overhead.

    Taichi-related wait_for_upstream 
    opened by ifsheldon 1
  • Flexible tensor shape support

    Flexible tensor shape support

    Now stannum only supports tensors with fixed shapes, which are defined by the shapes of registered fields. However, Taichi kernels are more flexible than that.

    For example, this simple kernel can handle 3 arrays of the same arbitrary length:

    @ti.kernel
    def array_add(array0: ti.template(), array1: ti.template(), output_array: ti.template()):
        for i in range(array0.shape[0]):
            output_array[i] = array0[i] + array1[i]  
    

    But we cannot do that with stannum yet.

    I don't have a clear idea yet about how to implement this, but discussions and PRs are always welcome.

    enhancement Taichi-related welcome_contribution 
    opened by ifsheldon 1
  • [bug fix] fix pip build no content

    [bug fix] fix pip build no content

    Previously, a level was missing in the src hierarchy, so the source code was not packaged into the whl build artifact; the package could be installed but could not be imported.

    This PR fixes the problem by restoring the correct code layout according to the Python Packaging Tutorial. It creates a new src folder, moves the stannum folder into it, and updates the folder name in setup.py.
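
    For context, a hedged sketch of what an src layout and the matching setup.py configuration typically look like (illustrative only, not the actual contents of the PR):

    # illustrative src layout (not copied from the PR):
    #   setup.py
    #   src/
    #       stannum/
    #           __init__.py
    #           ...
    from setuptools import setup, find_packages

    setup(
        name="stannum",
        packages=find_packages(where="src"),  # discover packages under src/
        package_dir={"": "src"},              # tell setuptools the package root is src/
    )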

    opened by jerrylususu 0
  • Dynamic output tensor shape

    Dynamic output tensor shape

    Hi ! I'm writing a convolution-like operator using Stannum. It can be used throughout a neural network, meaning each layer may have a different input/output shape. When trying to register the output tensor, it leads to this error: AssertionError: Dim = -1 is not allowed when registering output tensors but only registering input tensors

    Does it mean I have to template and recompile the kernel for each layer of the neural network?

    For reference, here is the whole kernel/tube construction:

    @ti.kernel
    def op_taichi(gamma: ti.template(), mu: ti.template(), c: ti.template(), input: ti.template(), weight_shape_1: int, weight_shape_2: int, weight_shape_3:int):
        ti.block_local(c, mu, gamma)
        for bi in range(input.shape[0]):
            for c0 in range(input.shape[1]):
                for i0 in range(input.shape[2]):
                    for j0 in range(input.shape[3]):
                        for i0p in range(input.shape[5]):
                            for j0p in range(input.shape[6]):
                                v = 0.
                                for ci in ti.static(range(weight_shape_1)):
                                    for ii in ti.static(range(weight_shape_2)):
                                        for ji in ti.static(range(weight_shape_3)):
                                            v += (mu[bi, ci, i0+ii, j0+ji] * mu[bi, ci, i0p+ii, j0p+ji] + gamma[bi, ci, i0+ii, j0+ji, ci, i0p+ii, j0p+ji])
                                input[bi, c0, i0, j0, c0, i0p, j0p] += c[c0] * v
        return input
    
    
    def conv2duf_taichi(input, gamma, mu, c, weight_shape):
        if c.dim() == 0:
            c = c.repeat(input.shape[1])
        global TUBE
        if TUBE is None:
            device = input.device # TODO dim alignment with -2, ...
            b = input.shape[0]
            tube = Tube(device) \
                .register_input_tensor((-1,)*7, input.dtype, "gamma", True) \
                .register_input_tensor((-1,)*4, input.dtype, "mu", True) \
                .register_input_tensor((-1,), input.dtype, "c", True) \
                .register_output_tensor((-1,)*7, input.dtype, "input", True) \
                .register_kernel(op_taichi, ["gamma", "mu", "c", "input"]) \
                .finish()
            TUBE = tube
        return TUBE(gamma, mu, c, input, weight_shape[1], weight_shape[2], weight_shape[3])
    
    opened by sebastienwood 5
  • How best to use Vector or Matrix fields?

    How best to use Vector or Matrix fields?

    Is this something worth adding? Happy to give it a go.

    I see this is kind of supported for complex types. Is it preferable to just convert scalar fields to vector fields (via indexing) in the kernel? I don't see any easy way of converting an (n, m, 3) field to an (n, m) vector3 field, but I might be missing something.

    opened by oliver-batchelor 1
  • Memory and Performance issue of Taichi

    Memory and Performance issue of Taichi

    With the current Taichi (v0.9.1 - 1.2.1), calling Tube N times results in O(N^2) time complexity, because when creating a field Taichi needs to inject kernel information into the field, which causes memory movement that is O(M), where M is the number of existing fields. The total complexity is therefore 1 + 2 + 3 + ... + N = O(N^2). This is not stannum's fault, and the Taichi developers are fixing it, although it is taking quite some time.

    In forward-only computation, this can be resolved by eagerly destroying fields and the SNodeTree, which is included in stannum 0.6.2.

    Taichi-related wait_for_upstream 
    opened by ifsheldon 4
  • Automatic batching

    Automatic batching

    Now stannum (and generally Taichi) cannot do automatic batching as done in PyTorch.

    For example, the kernel below can only handle 3 arrays; if we have a batch of arrays, we will have to loop over the batch dimension or change the code to support batches of a fixed size. This issue is somewhat related to issue #5. The ultimate goal should be supporting automatic batching with tensors of valid flexible shapes.

    @ti.kernel
    def array_add(self):
        for i in self.array0:
            self.output_array[i] = self.array0[i] + self.array1[i]  
    

    For the first step, dynamic looping (i.e. calling the kernel over and over again) is acceptable and is a good first issue; a sketch is below.
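
    A minimal sketch of such dynamic looping on the user side (tin_layer and the batched tensors are hypothetical; the layer is simply called once per batch element and the results are stacked):

    import torch

    # hypothetical: tin_layer wraps the unbatched array_add kernel,
    # and batch_a / batch_b are tensors of shape (batch_size, n)
    outputs = [tin_layer(a, b) for a, b in zip(batch_a, batch_b)]
    batched_output = torch.stack(outputs, dim=0)  # shape (batch_size, n)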

    PRs and discussions are always welcome.

    enhancement good first issue Taichi-related wait_for_upstream welcome_contribution 
    opened by ifsheldon 3
Releases(v0.8.0)
  • v0.8.0(Dec 28, 2022)

    Since last release:

    • A bug has been fixed. The bug appeared when, after forward computation, kernel extra args were updated via set_kernel_extra_args (once or multiple times); backward computation was then messed up due to inconsistent kernel inputs between the forward and backward passes.
    • The APIs of Tin and EmptyTin have changed: the constructors now require auto_clear_grad to be specified, as a reminder to users that gradients of fields must be handled carefully so as not to produce incorrect gradients after multiple runs of Tin or EmptyTin layers (see the sketch below).
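
    A hedged sketch of the changed constructor call (field and kernel names are hypothetical; auto_clear_grad is shown as a keyword argument, which is an assumption):

    # a hedged sketch only: field and kernel names are hypothetical, and the exact
    # constructor signature should be checked against the documentation
    tin_layer = Tin(data_oriented, device=device, auto_clear_grad=True) \
        .register_kernel(data_oriented.forward_kernel) \
        .register_input_field(data_oriented.input_field) \
        .register_output_field(data_oriented.output_field) \
        .finish()
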
    Source code(tar.gz)
    Source code(zip)
  • v0.7.0(Sep 20, 2022)

    Nothing big has changed in the stannum code base, but since the Taichi developers have delivered a long-awaited performance improvement, I want to urge everyone using stannum to update Taichi to 1.1.3. Some warnings and documentation have been added to help stannum users understand this important upstream update.

    Source code(tar.gz)
    Source code(zip)
  • v0.6.4(Aug 10, 2022)

  • v0.6.2(Mar 21, 2022)

    Introduced a configuration option in Tube: enable_backward. When enable_backward is False, Tube will eagerly recycle Taichi memory by destroying the SNodeTree right after the forward calculation. This should improve the performance of forward-only calculations and mitigate Taichi's memory problem in forward-only mode.
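
    A minimal sketch of a forward-only Tube with this option (the kernel and shapes are borrowed from the v0.4.0 example further down; passing enable_backward to the constructor is an assumption):

    # reuses the ti_add kernel from the v0.4.0 notes below; treating enable_backward
    # as a constructor keyword argument is an assumption
    forward_only_tube = Tube(torch.device("cpu"), enable_backward=False) \
        .register_input_tensor((10,), torch.float32, "arr_a", False) \
        .register_input_tensor((10,), torch.float32, "arr_b", False) \
        .register_output_tensor((10,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    out = forward_only_tube(torch.ones(10), torch.ones(10))  # fields recycled right after forward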

    Source code(tar.gz)
    Source code(zip)
  • v0.6.1(Mar 8, 2022)

    • #7 is fixed because upstream Taichi has fixed the uninitialized memory problem in 0.9.1
    • Intermediate fields are now required to be batched if any input tensors are batched
    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Feb 23, 2022)

    Persistent mode and Eager mode of Tube

    Before v0.5.0, the Taichi fields created in Tube were persistent, and their lifetime was: PyTorch upstream tensors -> Tube -> create fields -> forward pass -> copy values to downstream tensors -> compute graph of Autograd completes -> optional backward pass -> compute graph destroyed -> destroy fields

    They're so-called persistent fields as they persist when the compute graph is being constructed.

    Now in v0.5.0, we introduce an eager mode of Tube. With persistent_fields=False when instantiating a Tube, eager mode is turned on, in which the lifetime of fields is: PyTorch upstream tensors -> Tube -> create fields -> copy values to downstream tensors -> destroy fields -> compute graph of Autograd completes -> optional backward pass -> compute graph destroyed

    Zooming in on the optional backward pass: since we've destroyed the fields that stored values in the forward pass, we need to re-allocate new fields when calculating gradients. The backward pass is then: downstream gradients -> Tube -> create fields and load values -> load downstream gradients into fields -> backward pass -> copy gradients to tensors -> destroy fields -> upstream PyTorch gradient calculation

    This introduces some overhead but may be faster on "old" Taichi (any Taichi that has not merged https://github.com/taichi-dev/taichi/pull/4356). For details, please see this PR. At the time we released v0.5.0, stable Taichi had not merged this PR.
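
    A minimal sketch of turning on eager mode (the kernel and shapes are borrowed from the v0.4.0 example further down; only persistent_fields=False is taken from the note above):

    # reuses the ti_add kernel from the v0.4.0 notes below; persistent_fields=False
    # turns on the eager mode described above
    eager_tube = Tube(torch.device("cpu"), persistent_fields=False) \
        .register_input_tensor((10,), torch.float32, "arr_a", False) \
        .register_input_tensor((10,), torch.float32, "arr_b", False) \
        .register_output_tensor((10,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    out = eager_tube(torch.ones(10), torch.ones(10))  # fields destroyed as soon as outputs are copied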

    Compatibility issue fixes

    At the time we released v0.5.0, Taichi had been undergoing heavy refactoring, so we introduced many small fixes to deal with incompatibilities caused by that refactoring. If you find compatibility issues, feel free to submit issues and make PRs.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.4(Feb 21, 2022)

    Fix many problems due to Taichi changes and bugs:

    • API import problems due to Taichi API changes
    • Uninitialized-memory problem due to https://github.com/taichi-dev/taichi/issues/4334 and https://github.com/taichi-dev/taichi/issues/4016
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Jan 14, 2022)

    Tube

    Tube is more flexible than Tin but slower, in that it creates the necessary fields for you and does automatic batching.

    Registrations

    All you need to do is to register:

    • Input/intermediate/output tensor shapes instead of fields
    • At least one kernel that takes the following as arguments
      • Taichi fields: correspond to tensors (may or may not require gradients)
      • (Optional) Extra arguments: will NOT receive gradients

    Acceptable dimensions of tensors to be registered:

    • None: means the flexible batch dimension, must be the first dimension e.g. (None, 2, 3, 4)
    • Positive integers: fixed dimensions with the indicated dimensionality
    • Negative integers:
      • -1: means any number [1, +inf), only usable in the registration of input tensors.
      • Negative integers < -1: indices of some dimensions that must be of the same dimensionality
        • Restriction: negative indices must be "declared" in the registration of input tensors first, then used in the registration of intermediate and output tensors.
        • Example 1: tensors a and b of shapes a: (2, -2, 3) and b: (-2, 5, 6) mean the dimensions marked -2 must match.
        • Example 2: tensors a and b of shapes a: (-1, 2, 3) and b: (-1, 5, 6) mean there are no restrictions on the first dimensions.

    Registration order: Input tensors/intermediate fields/output tensors must be registered first, and then kernel.

    @ti.kernel
    def ti_add(arr_a: ti.template(), arr_b: ti.template(), output_arr: ti.template()):
        for i in arr_a:
            output_arr[i] = arr_a[i] + arr_b[i]
    
    ti.init(ti.cpu)
    cpu = torch.device("cpu")
    a = torch.ones(10)
    b = torch.ones(10)
    tube = Tube(cpu) \
        .register_input_tensor((10,), torch.float32, "arr_a", False) \
        .register_input_tensor((10,), torch.float32, "arr_b", False) \
        .register_output_tensor((10,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    out = tube(a, b)
    

    When registering a kernel, a list of field/tensor names is required, for example, the above ["arr_a", "arr_b", "output_arr"]. This list should correspond to the fields in the arguments of a kernel (e.g. above ti_add()).

    The order of input tensors should match the input fields of a kernel.

    Automatic batching

    Automatic batching is done simply by running kernels batch times. The batch number is determined by the leading dimension of tensors of registered shape (None, ...).

    It's required that if any input tensors or intermediate fields are batched (which means they have registered the first dimension to be None), all output tensors must be registered as batched.

    Examples

    Simple one without negative indices or batch dimension:

    @ti.kernel
    def ti_add(arr_a: ti.template(), arr_b: ti.template(), output_arr: ti.template()):
        for i in arr_a:
            output_arr[i] = arr_a[i] + arr_b[i]
    
    ti.init(ti.cpu)
    cpu = torch.device("cpu")
    a = torch.ones(10)
    b = torch.ones(10)
    tube = Tube(cpu) \
        .register_input_tensor((10,), torch.float32, "arr_a", False) \
        .register_input_tensor((10,), torch.float32, "arr_b", False) \
        .register_output_tensor((10,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    out = tube(a, b)
    

    With negative dimension index:

    ti.init(ti.cpu)
    cpu = torch.device("cpu")
    tube = Tube(cpu) \
        .register_input_tensor((-2,), torch.float32, "arr_a", False) \
        .register_input_tensor((-2,), torch.float32, "arr_b", False) \
        .register_output_tensor((-2,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    dim = 10
    a = torch.ones(dim)
    b = torch.ones(dim)
    out = tube(a, b)
    assert torch.allclose(out, torch.full((dim,), 2.))
    dim = 100
    a = torch.ones(dim)
    b = torch.ones(dim)
    out = tube(a, b)
    assert torch.allclose(out, torch.full((dim,), 2.))
    

    With batch dimension:

    @ti.kernel
    def int_add(a: ti.template(), b: ti.template(), out: ti.template()):
        out[None] = a[None] + b[None]
    
    ti.init(ti.cpu)
    b = torch.tensor(1., requires_grad=True)
    batched_a = torch.ones(10, requires_grad=True)
    tube = Tube() \
        .register_input_tensor((None,), torch.float32, "a") \
        .register_input_tensor((), torch.float32, "b") \
        .register_output_tensor((None,), torch.float32, "out", True) \
        .register_kernel(int_add, ["a", "b", "out"]) \
        .finish()
    out = tube(batched_a, b)
    loss = out.sum()
    loss.backward()
    assert torch.allclose(torch.ones_like(batched_a) + 1, out)
    assert b.grad == 10.
    assert torch.allclose(torch.ones_like(batched_a), batched_a.grad)
    

    For more examples, including invalid uses, please see the tests in tests/test_tube.

    Advanced field construction with FieldManager

    There is a way to tweak how fields are constructed in order to gain performance improvements in kernel calculations.

    By supplying a customized FieldManager when registering a field, you can construct a field however you want.

    Please refer to the code of FieldManager in src/stannum/auxiliary.py for more information.

    If you don't know why constructing fields differently can improve performance, don't use this feature.

    If you don't know how to construct fields differently, please refer to Taichi field documentation.

    Source code(tar.gz)
    Source code(zip)
    stannum-0.4.0-py3-none-any.whl(15.81 KB)
  • v0.3.2(Jan 1, 2022)

  • v0.3.1(Dec 30, 2021)

    Fix a bug.

    Details: when some of the input fields or internal fields do not need gradients (i.e. needs_grad==False), an incorrect number of backward gradients is passed to PyTorch Autograd, crashing back propagation.

    Source code(tar.gz)
    Source code(zip)
  • v0.3(Dec 30, 2021)

    New feature:

    • Add complex tensor support: you need to specify that a field expects a complex tensor as its data source
      tin_layer = Tin(data_oriented_vector_field, device) \
            .register_kernel(data_oriented_vector_field.forward_kernel, 1.0) \
            .register_input_field(data_oriented_vector_field.input_field, complex_dtype=True) \
            .register_output_field(data_oriented_vector_field.output_field, complex_dtype=True) \
            .register_internal_field(data_oriented_vector_field.multiplier) \
            .finish()
      

    Engineering:

    • Refactored the code a bit
    • Added type hints to enhance code readability
    Source code(tar.gz)
    Source code(zip)
    stannum-0.3.0-py3-none-any.whl(7.00 KB)
    stannum-0.3.0.tar.gz(7.52 KB)
  • v0.2(Aug 1, 2021)

    Now you can register multiple kernels. These kernels will be called sequentially in the order of registration. Please note that all fields needed to store intermediate results must be registered; a sketch is below.
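
    A hedged sketch of chaining two kernels with an internal field holding the intermediate result (kernel and field names are hypothetical):

    # a hedged sketch: kernel and field names are hypothetical
    tin_layer = Tin(data_oriented, device=device) \
        .register_kernel(data_oriented.first_kernel) \
        .register_kernel(data_oriented.second_kernel) \
        .register_input_field(data_oriented.input_field) \
        .register_internal_field(data_oriented.intermediate_field) \
        .register_output_field(data_oriented.output_field) \
        .finish()
    output = tin_layer(input_tensor)  # first_kernel then second_kernel run in registration order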

    API changes:

    • Tin.register_weight_field() -> Tin.register_internal_field()
    Source code(tar.gz)
    Source code(zip)
  • v0.1.3(Jul 14, 2021)

  • v0.1.2(Jul 13, 2021)

    Now you don't need to specify needs_grad when registering a field via .register_*_field(), as long as you use Taichi > 0.7.26. If you use a legacy version of Taichi, you must still specify needs_grad yourself, though.

    Source code(tar.gz)
    Source code(zip)
  • v0.1.1(Jul 11, 2021)

  • v0.1(Jul 9, 2021)
