PyTorch wrapper for Taichi data-oriented class

Stannum

PyTorch wrapper for Taichi data-oriented class

PRs are welcome; please see the TODOs below.

Usage

from stannum import Tin
import torch

data_oriented = TiClass()  # some Taichi data-oriented class 
device = torch.device("cpu")
tin_layer = Tin(data_oriented, device=device) \
    .register_kernel(data_oriented.forward_kernel) \
    .register_input_field(data_oriented.input_field, True) \
    .register_output_field(data_oriented.output_field, True) \
    .register_weight_field(data_oriented.weight_field, True, name="field name") \
    .finish()  # finish() is required to complete construction
tin_layer.set_kernel_extra_args(1.0)  # extra (non-field) kernel arguments
output = tin_layer(input_tensor)

For input and output:

  • We can register multiple input_fields, output_fields, and weight_fields.
  • At least one input_field and one output_field should be registered.
  • The order of input tensors must match the registration order of input_fields.
  • The output order will align with the registration order of output_fields.
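
For reference, a minimal sketch of what such a Taichi data-oriented class might look like (the field shapes, dtype, kernel body, and the extra scale argument are illustrative assumptions, not stannum requirements):

import taichi as ti

ti.init(ti.cpu)

@ti.data_oriented
class TiClass:
    def __init__(self, n: int = 10):
        # fields that get registered with the Tin layer above
        self.input_field = ti.field(ti.f32, shape=n, needs_grad=True)
        self.output_field = ti.field(ti.f32, shape=n, needs_grad=True)
        self.weight_field = ti.field(ti.f32, shape=n, needs_grad=True)

    @ti.kernel
    def forward_kernel(self, scale: ti.f32):
        # scale is the extra kernel argument passed via set_kernel_extra_args(1.0)
        for i in self.input_field:
            self.output_field[i] = scale * self.weight_field[i] * self.input_field[i]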

Installation & Dependencies

Install stannum with pip by

python -m pip install stannum

Make sure you have the following installed:

  • PyTorch
  • Taichi

TODOs

Documentation

  • Code documentation
  • Documentation for users
  • Nicer error messages

Engineering

  • Set up CI pipeline

Features

  • PyTorch-related:
    • PyTorch checkpoint and save model
    • Proxy torch.nn.parameter.Parameter for weight fields for optimizers
  • Python related:
    • @property for a data-oriented class as an alternative way to register
  • Taichi related:
    • Wait for Taichi to have native PyTorch tensor view to optimize performance
    • Automatic Batching - waiting for upstream Taichi improvement
      • workaround for now: do static manual batching, i.e. extend fields with one extra dimension for batching (see the sketch after this list)
  • Self:
    • Allow registering multiple kernels in a call chain fashion
      • workaround for now: combine kernels into a mega kernel using @ti.complex_kernel
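
A sketch of the static manual batching workaround mentioned above, assuming a hypothetical element-wise kernel (batch size, shapes, and names are made up for illustration):

import taichi as ti

ti.init(ti.cpu)

BATCH, N = 4, 10

@ti.data_oriented
class BatchedTiClass:
    def __init__(self):
        # extend each field with one extra leading dimension for the batch
        self.input_field = ti.field(ti.f32, shape=(BATCH, N), needs_grad=True)
        self.output_field = ti.field(ti.f32, shape=(BATCH, N), needs_grad=True)

    @ti.kernel
    def forward_kernel(self):
        # the struct-for also covers the batch dimension
        for b, i in self.input_field:
            self.output_field[b, i] = 2.0 * self.input_field[b, i]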

Misc

  • A nice logo
Comments
  • Compatible changes for v1.1.0 rc

    We're in the process of getting the v1.1.0 release candidate wheel out and noticed this PR is required for stannum to work with v1.1.0.

    v1.1.0 tracking: https://github.com/taichi-dev/taichi/milestone/5

    opened by ailzhang 3
  • Get rid of eager mode

    When the problems in https://github.com/taichi-dev/taichi/pull/4356 get fully resolved, we can safely get rid of the eager mode introduced in v0.5.0 without a performance penalty, reducing overhead.

    Taichi-related wait_for_upstream 
    opened by ifsheldon 1
  • Get rid of clearing fields

    Once https://github.com/taichi-dev/taichi/issues/4334 and https://github.com/taichi-dev/taichi/issues/4016 are resolved, we can get rid of auto_clear (introduced in v0.4.4) and the clearing in Tube, avoiding unnecessary overhead.

    Taichi-related wait_for_upstream 
    opened by ifsheldon 1
  • Flexible tensor shape support

    Currently, stannum only supports tensors with fixed shapes, which are defined by the shapes of registered fields. However, Taichi kernels are more flexible than that.

    For example, this simple kernel can handle 3 arrays of the same arbitrary length

    @ti.kernel
    def array_add(array0: ti.template(), array1: ti.template(), output_array: ti.template()):
        for i in range(array0.shape[0]):
            output_array[i] = array0[i] + array1[i]  
    

    But we cannot do that with stannum now.

    I don't have a clear idea of how to implement this yet, but discussions and PRs are always welcome.

    enhancement Taichi-related welcome_contribution 
    opened by ifsheldon 1
  • [bug fix] fix pip build no content

    Previously, a level was missing in the src hierarchy, so the source code was not packaged into the wheel build artifact; as a result, the package could be installed but could not be imported.

    This PR fixes the problem by restoring the correct code layout according to the Python Packaging Tutorial. It creates a new src folder, moves the stannum folder into it, and updates the folder name in setup.py.

    opened by jerrylususu 0
  • Dynamic output tensor shape

    Hi ! I'm writing a convolution-like operator using Stannum. It can be used throughout a neural network, meaning each layer may have a different input/output shape. When trying to register the output tensor, it leads to this error: AssertionError: Dim = -1 is not allowed when registering output tensors but only registering input tensors

    Does it mean I have to template and recompile the kernel for each layer of the neural network?

    For reference, here is the whole kernel/tube construction:

    @ti.kernel
    def op_taichi(gamma: ti.template(), mu: ti.template(), c: ti.template(), input: ti.template(), weight_shape_1: int, weight_shape_2: int, weight_shape_3:int):
        ti.block_local(c, mu, gamma)
        for bi in range(input.shape[0]):
            for c0 in range(input.shape[1]):
                for i0 in range(input.shape[2]):
                    for j0 in range(input.shape[3]):
                        for i0p in range(input.shape[5]):
                            for j0p in range(input.shape[6]):
                                v = 0.
                                for ci in ti.static(range(weight_shape_1)):
                                    for ii in ti.static(range(weight_shape_2)):
                                        for ji in ti.static(range(weight_shape_3)):
                                            v += (mu[bi, ci, i0+ii, j0+ji] * mu[bi, ci, i0p+ii, j0p+ji] + gamma[bi, ci, i0+ii, j0+ji, ci, i0p+ii, j0p+ji])
                                input[bi, c0, i0, j0, c0, i0p, j0p] += c[c0] * v
        return input
    
    
    def conv2duf_taichi(input, gamma, mu, c, weight_shape):
        if c.dim() == 0:
            c = c.repeat(input.shape[1])
        global TUBE
        if TUBE is None:
            device = input.device # TODO dim alignment with -2, ...
            b = input.shape[0]
            tube = Tube(device) \
                .register_input_tensor((-1,)*7, input.dtype, "gamma", True) \
                .register_input_tensor((-1,)*4, input.dtype, "mu", True) \
                .register_input_tensor((-1,), input.dtype, "c", True) \
                .register_output_tensor((-1,)*7, input.dtype, "input", True) \
                .register_kernel(op_taichi, ["gamma", "mu", "c", "input"]) \
                .finish()
            TUBE = tube
        return TUBE(gamma, mu, c, input, weight_shape[1], weight_shape[2], weight_shape[3])
    
    opened by sebastienwood 5
  • How best to use Vector or Matrix fields?

    Is this something worth adding? Happy to give it a go.

    I see this is kind of supported for complex types. Is it preferable to just convert scalar fields to vector fields (via indexing) in the kernel? I don't see any easy way of converting an (n, m, 3) field to an (n, m) vector3 field, but I might be missing something?
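
    For illustration only, an indexing-based conversion of an (n, m, 3) scalar field into an (n, m) vector3 field might look like the following sketch (field names and shapes are hypothetical):

    import taichi as ti

    ti.init(ti.cpu)
    n, m = 8, 8
    scalar_field = ti.field(ti.f32, shape=(n, m, 3))
    vector_field = ti.Vector.field(3, ti.f32, shape=(n, m))

    @ti.kernel
    def to_vector_field():
        # gather the trailing dimension of the scalar field into a vec3
        for i, j in vector_field:
            vector_field[i, j] = ti.Vector([scalar_field[i, j, 0],
                                            scalar_field[i, j, 1],
                                            scalar_field[i, j, 2]])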

    opened by oliver-batchelor 1
  • Memory and Performance issue of Taichi

    With current Taichi (v0.9.1 - 1.2.1), calling Tube N times results in O(N^2) time complexity: when creating a field, Taichi needs to inject kernel information into the field, which incurs memory movement that is O(M), where M is the number of existing fields. The total cost is therefore 1 + 2 + 3 + ... + N = O(N^2). This is not our fault, and Taichi developers are fixing it, although it is taking quite some time.

    In forward-only computation, this can be mitigated by eagerly destroying fields and SNodeTrees, which is included in stannum 0.6.2.

    Taichi-related wait_for_upstream 
    opened by ifsheldon 4
  • Automatic batching

    Currently, stannum (and Taichi in general) cannot do automatic batching as PyTorch does.

    For example, the kernel below can only handle 3 arrays; if we have a batch of arrays, we have to loop over the batch dimension or change the code to support batches of a fixed size. This issue is somewhat related to issue #5. The ultimate goal is to support automatic batching with tensors of valid flexible shapes.

    @ti.kernel
    def array_add(self):
        for i in self.array0:
            self.output_array[i] = self.array0[i] + self.array1[i]  
    

    As a first step, dynamic looping (i.e. calling the kernel over and over again) is acceptable and is a good first issue.
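
    As a sketch, the dynamic-looping workaround can live entirely on the PyTorch side (tin_layer is any already-constructed stannum layer; the helper name is hypothetical):

    import torch

    def batched_forward(tin_layer, batched_input: torch.Tensor) -> torch.Tensor:
        # call the layer once per sample and stack the per-sample outputs
        outputs = [tin_layer(sample) for sample in batched_input]
        return torch.stack(outputs, dim=0)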

    PRs and discussions are always welcome.

    enhancement good first issue Taichi-related wait_for_upstream welcome_contribution 
    opened by ifsheldon 3
Releases(v0.8.0)
  • v0.8.0(Dec 28, 2022)

    Since last release:

    • A bug has been fixed. The bug appeared when, after forward computation, kernel extra args were updated via set_kernel_extra_args (once or multiple times); backward computation was then messed up due to inconsistent kernel inputs between the forward and backward passes.
    • The APIs of Tin and EmptyTin have changed: the constructors now require auto_clear_grad to be specified, which is a reminder to users that gradients of fields must be handled carefully so as not to produce incorrect gradients after multiple runs of Tin or EmptyTin layers.
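
    A usage sketch under the changed API (whether auto_clear_grad is passed exactly like this is an assumption based on the note above):

    # hypothetical construction; auto_clear_grad must now be given explicitly
    tin_layer = Tin(data_oriented, device=device, auto_clear_grad=True) \
        .register_kernel(data_oriented.forward_kernel) \
        .register_input_field(data_oriented.input_field) \
        .register_output_field(data_oriented.output_field) \
        .finish()
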
    Source code(tar.gz)
    Source code(zip)
  • v0.7.0(Sep 20, 2022)

    Nothing big has changed in the stannum code base, but since the Taichi developers have delivered a long-awaited performance improvement, I want to urge everyone using stannum to update Taichi to 1.1.3. Some warnings and documentation have been added to help stannum users understand this important upstream update.

    Source code(tar.gz)
    Source code(zip)
  • v0.6.4(Aug 10, 2022)

  • v0.6.2(Mar 21, 2022)

    Introduced a configuration option in Tube, enable_backward. When enable_backward is False, Tube eagerly recycles Taichi memory by destroying the SNodeTree right after the forward calculation. This should improve the performance of forward-only calculations and mitigate Taichi's memory problem in forward-only mode.
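
    A sketch of a forward-only Tube (assuming enable_backward is a constructor argument; the kernel and tensor names are hypothetical):

    # forward-only: Taichi memory is recycled right after the forward pass
    tube = Tube(device, enable_backward=False) \
        .register_input_tensor((10,), torch.float32, "arr_a", False) \
        .register_output_tensor((10,), torch.float32, "output_arr", False) \
        .register_kernel(some_forward_kernel, ["arr_a", "output_arr"]) \
        .finish()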

    Source code(tar.gz)
    Source code(zip)
  • v0.6.1(Mar 8, 2022)

    • #7 is fixed because upstream Taichi fixed the uninitialized-memory problem in 0.9.1
    • Intermediate fields are now required to be batched if any input tensors are batched
    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Feb 23, 2022)

    Persistent mode and Eager mode of Tube

    Before v0.5.0, the Taichi fields created in Tube are persistent, and their lifetime looks like: PyTorch upstream tensors -> Tube -> create fields -> forward pass -> copy values to downstream tensors -> compute graph of Autograd completes -> optional backward pass -> compute graph destroyed -> destroy fields

    They're so-called persistent fields, as they persist for the whole lifetime of the Autograd compute graph.

    In v0.5.0, we introduce an eager mode for Tube. With persistent_fields=False when instantiating a Tube, eager mode is turned on, and the lifetime of fields looks like: PyTorch upstream tensors -> Tube -> fields -> copied values to downstream tensors -> destroy fields -> compute graph of Autograd completes -> optional backward pass -> compute graph destroyed

    Zooming in on the optional backward pass: since we've destroyed the fields that stored values during the forward pass, we need to re-allocate new fields when calculating gradients. The backward pass then looks like: Downstream gradients -> Tube -> create fields and load values -> load downstream gradients to fields -> backward pass -> copy gradients to tensors -> Destroy fields -> Upstream PyTorch gradient calculation

    This introduces some overhead but may be faster on "old" Taichi (any Taichi that has not merged https://github.com/taichi-dev/taichi/pull/4356). For details, please see that PR. At the time of the v0.5.0 release, stable Taichi had not merged it.

    Compatibility issue fixes

    At the time of the v0.5.0 release, Taichi was being heavily refactored, so we introduced many small fixes to deal with incompatibilities caused by that refactoring. If you find compatibility issues, feel free to submit issues and make PRs.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.4(Feb 21, 2022)

    Fixed many problems caused by Taichi changes and bugs:

    • API import problems due to Taichi API changes
    • Uninitialized-memory problem due to https://github.com/taichi-dev/taichi/issues/4334 and https://github.com/taichi-dev/taichi/issues/4016
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Jan 14, 2022)

    Tube

    Tube is more flexible than Tin but slower, in that it helps you create the necessary fields and does automatic batching.

    Registrations

    All you need to do is to register:

    • Input/intermediate/output tensor shapes instead of fields
    • At least one kernel that takes the following as arguments
      • Taichi fields: correspond to tensors (may or may not require gradients)
      • (Optional) Extra arguments: will NOT receive gradients

    Acceptable dimensions of tensors to be registered:

    • None: means the flexible batch dimension; it must be the first dimension, e.g. (None, 2, 3, 4)
    • Positive integers: fixed dimensions with the indicated dimensionality
    • Negative integers:
      • -1: means any dimensionality in [1, +inf); only usable in the registration of input tensors.
      • Negative integers < -1: indices of dimensions that must have the same dimensionality
        • Restriction: negative indices must be "declared" in the registration of input tensors first, and only then used in the registration of intermediate and output tensors.
        • Example 1: tensors a and b of shapes (2, -2, 3) and (-2, 5, 6) mean that the dimensions marked -2 must match.
        • Example 2: tensors a and b of shapes (-1, 2, 3) and (-1, 5, 6) place no restriction on their first dimensions.

    Registration order: input tensors, intermediate fields, and output tensors must be registered first, and then the kernel.

    @ti.kernel
    def ti_add(arr_a: ti.template(), arr_b: ti.template(), output_arr: ti.template()):
        for i in arr_a:
            output_arr[i] = arr_a[i] + arr_b[i]
    
    ti.init(ti.cpu)
    cpu = torch.device("cpu")
    a = torch.ones(10)
    b = torch.ones(10)
    tube = Tube(cpu) \
        .register_input_tensor((10,), torch.float32, "arr_a", False) \
        .register_input_tensor((10,), torch.float32, "arr_b", False) \
        .register_output_tensor((10,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    out = tube(a, b)
    

    When registering a kernel, a list of field/tensor names is required, for example ["arr_a", "arr_b", "output_arr"] above. This list should correspond to the field arguments of the kernel (e.g. ti_add() above).

    The order of input tensors should match the input fields of a kernel.

    Automatic batching

    Automatic batching is done simply by running the kernels once per batch entry. The batch size is determined by the leading dimension of tensors registered with shape (None, ...).

    It's required that if any input tensors or intermediate fields are batched (i.e. their first dimension is registered as None), all output tensors must be registered as batched as well.

    Examples

    Simple one without negative indices or batch dimension:

    @ti.kernel
    def ti_add(arr_a: ti.template(), arr_b: ti.template(), output_arr: ti.template()):
        for i in arr_a:
            output_arr[i] = arr_a[i] + arr_b[i]
    
    ti.init(ti.cpu)
    cpu = torch.device("cpu")
    a = torch.ones(10)
    b = torch.ones(10)
    tube = Tube(cpu) \
        .register_input_tensor((10,), torch.float32, "arr_a", False) \
        .register_input_tensor((10,), torch.float32, "arr_b", False) \
        .register_output_tensor((10,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    out = tube(a, b)
    

    With negative dimension index:

    ti.init(ti.cpu)
    cpu = torch.device("cpu")
    tube = Tube(cpu) \
        .register_input_tensor((-2,), torch.float32, "arr_a", False) \
        .register_input_tensor((-2,), torch.float32, "arr_b", False) \
        .register_output_tensor((-2,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    dim = 10
    a = torch.ones(dim)
    b = torch.ones(dim)
    out = tube(a, b)
    assert torch.allclose(out, torch.full((dim,), 2.))
    dim = 100
    a = torch.ones(dim)
    b = torch.ones(dim)
    out = tube(a, b)
    assert torch.allclose(out, torch.full((dim,), 2.))
    

    With batch dimension:

    @ti.kernel
    def int_add(a: ti.template(), b: ti.template(), out: ti.template()):
        out[None] = a[None] + b[None]
    
    ti.init(ti.cpu)
    b = torch.tensor(1., requires_grad=True)
    batched_a = torch.ones(10, requires_grad=True)
    tube = Tube() \
        .register_input_tensor((None,), torch.float32, "a") \
        .register_input_tensor((), torch.float32, "b") \
        .register_output_tensor((None,), torch.float32, "out", True) \
        .register_kernel(int_add, ["a", "b", "out"]) \
        .finish()
    out = tube(batched_a, b)
    loss = out.sum()
    loss.backward()
    assert torch.allclose(torch.ones_like(batched_a) + 1, out)
    assert b.grad == 10.
    assert torch.allclose(torch.ones_like(batched_a), batched_a.grad)
    

    For more examples of invalid uses, please see the tests in tests/test_tube.

    Advanced field construction with FieldManager

    There is a way to tweak how fields are constructed in order to gain performance improvement in kernel calculations.

    By supplying a customized FieldManager when registering a field, you can construct a field however you want.

    Please refer to the code of FieldManager in src/stannum/auxiliary.py for more information.

    If you don't know why constructing fields differently can improve performance, don't use this feature.

    If you don't know how to construct fields differently, please refer to Taichi field documentation.

    Source code(tar.gz)
    Source code(zip)
    stannum-0.4.0-py3-none-any.whl(15.81 KB)
  • v0.3.2(Jan 1, 2022)

  • v0.3.1(Dec 30, 2021)

    Fixed a bug.

    Details: when some input fields or internal fields do not need gradients (i.e. needs_grad==False), an incorrect number of backward gradients is passed to PyTorch Autograd, crashing backpropagation.

    Source code(tar.gz)
    Source code(zip)
  • v0.3(Dec 30, 2021)

    New feature:

    • Added complex tensor support: you need to specify that a field expects a complex tensor as its data source
      tin_layer = Tin(data_oriented_vector_field, device) \
            .register_kernel(data_oriented_vector_field.forward_kernel, 1.0) \
            .register_input_field(data_oriented_vector_field.input_field, complex_dtype=True) \
            .register_output_field(data_oriented_vector_field.output_field, complex_dtype=True) \
            .register_internal_field(data_oriented_vector_field.multiplier) \
            .finish()
      

    Engineering:

    • Refactored code a bit
    • Added type hints to enhance code readability
    Source code(tar.gz)
    Source code(zip)
    stannum-0.3.0-py3-none-any.whl(7.00 KB)
    stannum-0.3.0.tar.gz(7.52 KB)
  • v0.2(Aug 1, 2021)

    Now you can register multiple kernels. These kernels will be called sequentially in the order of registration. Please note that all fields needed to store intermediate results must be registered.
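
    A sketch of chaining two kernels (kernel and field names are hypothetical; the intermediate field holds the result of the first kernel):

    # stage1_kernel writes intermediate_field; stage2_kernel reads it
    tin_layer = Tin(data_oriented, device=device) \
        .register_kernel(data_oriented.stage1_kernel) \
        .register_kernel(data_oriented.stage2_kernel) \
        .register_input_field(data_oriented.input_field) \
        .register_internal_field(data_oriented.intermediate_field) \
        .register_output_field(data_oriented.output_field) \
        .finish()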

    API changes:

    • Tin.register_weight_field() -> Tin.register_internal_field()
    Source code(tar.gz)
    Source code(zip)
  • v0.1.3(Jul 14, 2021)

  • v0.1.2(Jul 13, 2021)

    Now you don't need to specify needs_grad when registering a field via .register_*_field(), as long as you use Taichi > 0.7.26. If you use a legacy version of Taichi, you must still specify needs_grad yourself, though.

    Source code(tar.gz)
    Source code(zip)
  • v0.1.1(Jul 11, 2021)

  • v0.1(Jul 9, 2021)
