Tree Nested PyTorch Tensor Lib

Overview

DI-treetensor


treetensor is a generalized tree-based tensor structure mainly developed by OpenDILab Contributors.

Almost all torch operations are supported in tree form, which simplifies structure handling when the computation itself is tree-based.

Installation

You can install it with pip from the official PyPI site:

pip install di-treetensor

For more information about installation, you can refer to the Installation section of the documentation.
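
To verify the installation, you can run a quick sanity check (this mirrors the quick start below; if it prints a tree structure, the package works):

import treetensor.torch as torch

# create a small tree tensor as a smoke test
t = torch.randn({'a': (2, 3), 'b': {'x': (3, 4)}})
print(t)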

Documentation

The detailed documentation is hosted at https://opendilab.github.io/DI-treetensor.

Only the English version is provided for now; the Chinese documentation is still under development.

Quick Start

You can easily create a tree value object based on FastTreeValue.

import builtins
import os
from functools import partial

import treetensor.torch as torch

print = partial(builtins.print, sep=os.linesep)

if __name__ == '__main__':
    # create a tree tensor
    t = torch.randn({'a': (2, 3), 'b': {'x': (3, 4)}})
    print(t)
    print(torch.randn(4, 5))  # create a normal tensor
    print()

    # structure of tree
    print('Structure of tree')
    print('t.a:', t.a)  # t.a is a native tensor
    print('t.b:', t.b)  # t.b is a tree tensor
    print('t.b.x', t.b.x)  # t.b.x is a native tensor
    print()

    # math calculations
    print('Math calculation')
    print('t ** 2:', t ** 2)
    print('torch.sin(t).cos()', torch.sin(t).cos())
    print()

    # backward calculation
    print('Backward calculation')
    t.requires_grad_(True)
    t.std().arctan().backward()
    print('grad of t:', t.grad)
    print()

    # native operation
    # all the ops can be used as the original usage of `torch`
    print('Native operation')
    print('torch.sin(t.a)', torch.sin(t.a))  # sin of native tensor

The result should be:

<Tensor 0x7f0dae602760>
├── a --> tensor([[-1.2672, -1.5817, -0.3141],
│                 [ 1.8107, -0.1023,  0.0940]])
└── b --> <Tensor 0x7f0dae602820>
    └── x --> tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
                      [ 1.5956,  0.8825, -0.5702, -0.2247],
                      [ 0.9235,  0.4538,  0.8775, -0.2642]])

tensor([[-0.9559,  0.7684,  0.2682, -0.6419,  0.8637],
        [ 0.9526,  0.2927, -0.0591,  1.2804, -0.2455],
        [ 0.4699, -0.9998,  0.6324, -0.6885,  1.1488],
        [ 0.8920,  0.4401, -0.7785,  0.5931,  0.0435]])

Structure of tree
t.a:
tensor([[-1.2672, -1.5817, -0.3141],
        [ 1.8107, -0.1023,  0.0940]])
t.b:
<Tensor 0x7f0dae602820>
└── x --> tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
                  [ 1.5956,  0.8825, -0.5702, -0.2247],
                  [ 0.9235,  0.4538,  0.8775, -0.2642]])

t.b.x
tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
        [ 1.5956,  0.8825, -0.5702, -0.2247],
        [ 0.9235,  0.4538,  0.8775, -0.2642]])

Math calculation
t ** 2:
<Tensor 0x7f0dae602eb0>
├── a --> tensor([[1.6057, 2.5018, 0.0986],
│                 [3.2786, 0.0105, 0.0088]])
└── b --> <Tensor 0x7f0dae60c040>
    └── x --> tensor([[1.4943, 0.1187, 0.9960, 0.1669],
                      [2.5458, 0.7789, 0.3252, 0.0505],
                      [0.8528, 0.2059, 0.7699, 0.0698]])

torch.sin(t).cos()
<Tensor 0x7f0dae621910>
├── a --> tensor([[0.5782, 0.5404, 0.9527],
│                 [0.5642, 0.9948, 0.9956]])
└── b --> <Tensor 0x7f0dae6216a0>
    └── x --> tensor([[0.5898, 0.9435, 0.6672, 0.9221],
                      [0.5406, 0.7163, 0.8578, 0.9753],
                      [0.6983, 0.9054, 0.7185, 0.9661]])


Backward calculation
grad of t:
<Tensor 0x7f0dae60c400>
├── a --> tensor([[-0.0435, -0.0535, -0.0131],
│                 [ 0.0545, -0.0064, -0.0002]])
└── b --> <Tensor 0x7f0dae60cbe0>
    └── x --> tensor([[ 0.0357, -0.0141, -0.0349, -0.0162],
                      [ 0.0476,  0.0249, -0.0213, -0.0103],
                      [ 0.0262,  0.0113,  0.0248, -0.0116]])


Native operation
torch.sin(t.a)
tensor([[-0.9543, -0.9999, -0.3089],
        [ 0.9714, -0.1021,  0.0939]], grad_fn=<SinBackward>)

For more quick-start explanation and further usage, take a look at the documentation.

Extension

If you need to translate treevalue objects into runnable source code, you can use the potc-treevalue plugin, installed with the command below:

pip install DI-treetensor[potc]

With potc, you can translate objects into runnable Python source code, which can later be loaded back into objects by the Python interpreter, as illustrated below:

[Diagram: potc translation system]
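
As a minimal sketch of what this translation looks like in code (assuming potc's transvars interface; the names here are illustrative):

import treetensor.torch as ttorch
from potc import transvars  # available once DI-treetensor[potc] is installed

t = ttorch.tensor({'a': [1, 2], 'b': {'x': [[3, 4]]}})
code = transvars(t=t)  # python source code that rebuilds `t` when executed
print(code)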

For more information, you can refer to the potc project documentation.

Contribution

We appreciate all contributions to improve DI-treetensor, in both logic and system design. Please refer to CONTRIBUTING.md for more guidance.

Users can also join our Slack channel, or contact the core developer HansBug for more detailed discussion.

License

DI-treetensor is released under the Apache 2.0 license.


Comments
  • PyTorch OP List(P0)


    reference: https://pytorch.org/docs/1.8.0/torch.html

    Common

    • [x] numel
    • [x] cpu
    • [x] cuda
    • [x] to

    Creation Ops

    • [x] torch.zeros_like
    • [x] torch.randn_like
    • [x] torch.randint_like
    • [x] torch.ones_like
    • [x] torch.full_like
    • [x] torch.empty_like
    • [x] torch.zeros
    • [x] torch.randn
    • [x] torch.randint
    • [x] torch.ones
    • [x] torch.full
    • [x] torch.empty

    Indexing, Slicing, Joining, Mutating Ops

    • [x] cat
    • [x] chunk
    • [ ] gather
    • [x] index_select
    • [x] masked_select
    • [x] reshape
    • [ ] scatter
    • [x] split
    • [x] squeeze
    • [x] stack
    • [ ] tile
    • [ ] unbind
    • [x] unsqueeze
    • [x] where

    Math Ops

    Pointwise Ops
    • [x] add
    • [x] sub
    • [x] mul
    • [x] div
    • [x] pow
    • [x] neg
    • [x] abs
    • [x] sign
    • [x] floor
    • [x] ceil
    • [x] round
    • [x] sigmoid
    • [x] clamp
    • [x] exp
    • [x] exp2
    • [x] sqrt
    • [x] log
    • [x] log10
    • [x] log2
    Reduction Ops
    • [ ] argmax
    • [ ] argmin
    • [x] all
    • [x] any
    • [x] max
    • [x] min
    • [x] dist
    • [ ] logsumexp
    • [x] mean
    • [ ] median
    • [x] norm
    • [ ] prod
    • [x] std
    • [x] sum
    • [ ] unique
    Comparison Ops
    • [ ] argsort
    • [x] eq
    • [x] ge
    • [x] gt
    • [x] isfinite
    • [x] isinf
    • [x] isnan
    • [x] le
    • [x] lt
    • [x] ne
    • [ ] sort
    • [ ] topk
    Other Ops
    • [ ] cdist
    • [x] clone
    • [ ] flip

    BLAS and LAPACK Ops

    • [ ] addbmm
    • [ ] addmm
    • [ ] bmm
    • [x] dot
    • [x] matmul
    • [x] mm
    enhancement 
    opened by PaParaZz1 3
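
    For context, the ops checked above work leaf-by-leaf on trees, mirroring the plain torch usage; a minimal sketch (leaf shapes assumed to match):

    import treetensor.torch as ttorch

    a = ttorch.randn({'x': (2, 3), 'y': {'z': (2, 3)}})
    b = ttorch.randn({'x': (2, 3), 'y': {'z': (2, 3)}})

    print(ttorch.cat([a, b]))   # concatenates corresponding leaves along dim 0
    print(ttorch.sigmoid(a))    # pointwise op applied to every leaf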
  • PyTorch OP Doc List


    P0

    • [x] cpu
    • [x] cuda
    • [x] to
    • [x] torch.zeros_like
    • [x] torch.randn_like
    • [x] torch.ones_like
    • [x] torch.zeros
    • [x] torch.randn
    • [x] torch.randint
    • [x] torch.ones
    • [x] cat
    • [x] reshape
    • [x] split
    • [x] squeeze
    • [x] stack
    • [x] unsqueeze
    • [x] where
    • [x] abs
    • [x] add
    • [x] clamp
    • [x] div
    • [x] exp
    • [x] log
    • [x] sqrt
    • [x] sub
    • [x] sigmoid
    • [x] pow
    • [x] mul
    • [ ] argmax
    • [ ] argmin
    • [x] all
    • [x] any
    • [x] max
    • [x] min
    • [x] dist
    • [x] mean
    • [x] std
    • [x] sum
    • [x] eq
    • [x] ge
    • [x] gt
    • [x] le
    • [x] lt
    • [x] ne
    • [x] clone
    • [x] dot
    • [x] matmul
    • [x] mm

    P1

    • [x] numel
    • [x] torch.randint_like
    • [x] torch.full_like
    • [x] torch.empty_like
    • [x] torch.full
    • [x] torch.empty
    • [x] chunk
    • [ ] gather
    • [x] index_select
    • [x] masked_select
    • [ ] scatter
    • [ ] tile
    • [ ] unbind
    • [x] ceil
    • [x] exp2
    • [x] floor
    • [x] log10
    • [x] log2
    • [x] neg
    • [x] round
    • [x] sign
    • [ ] bmm

    P2

    • [ ] logsumexp
    • [ ] median
    • [x] norm
    • [ ] prod
    • [ ] unique
    • [ ] argsort
    • [x] isfinite
    • [x] isinf
    • [x] isnan
    • [ ] sort
    • [ ] topk
    • [ ] cdist
    • [ ] flip
    • [ ] addbmm
    • [ ] addmm
    opened by PaParaZz1 2
  • dev(hansbug): add stream support for paralleling the calculations in tree


    Here is an example:

    import time
    
    import numpy as np
    import torch
    
    import treetensor.torch as ttorch
    
    N, M, T = 200, 2, 50
    S1, S2, S3 = 512, 1024, 2048
    
    
    def test_min():
        a = ttorch.randn({f'a{i}': (S1, S2) for i in range(N // M)}, device='cuda')
        b = ttorch.randn({f'a{i}': (S2, S3) for i in range(N // M)}, device='cuda')
    
        result = []
        for i in range(T):
            _start_time = time.time()
    
            _ = ttorch.matmul(a, b)
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def test_native():
        a = {f'a{i}': torch.randn(S1, S2, device='cuda') for i in range(N)}
        b = {f'a{i}': torch.randn(S2, S3, device='cuda') for i in range(N)}
    
        result = []
        for i in range(T):
            _start_time = time.time()
    
            for key in a.keys():
                _ = torch.matmul(a[key], b[key])
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def test_linear():
        a = ttorch.randn({f'a{i}': (S1, S2) for i in range(N)}, device='cuda')
        b = ttorch.randn({f'a{i}': (S2, S3) for i in range(N)}, device='cuda')
    
        result = []
        for i in range(T):
            _start_time = time.time()
    
            _ = ttorch.matmul(a, b)
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def test_stream():
        a = ttorch.randn({f'a{i}': (S1, S2) for i in range(N)}, device='cuda')
        b = ttorch.randn({f'a{i}': (S2, S3) for i in range(N)}, device='cuda')
    
        ttorch.stream(M)
        result = []
        for i in range(T):
            _start_time = time.time()
    
            _ = ttorch.matmul(a, b)
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def warmup():
        # warm up
        a = torch.randn(1024, 1024).cuda()
        b = torch.randn(1024, 1024).cuda()
        for _ in range(20):
            c = torch.matmul(a, b)
    
    
    if __name__ == '__main__':
        warmup()
        test_min()
        test_native()
        test_linear()
        test_stream()
    
    

    To be honest, though, the real-world benefit of this stream feature is quite fragile: it is very sensitive to tensor size (both too large and too small perform poorly), it does not help when GPU performance is insufficient, and careless use can easily cause negative optimization. In short, it is hard to get right; this part needs further study before it can be made practical.

    enhancement 
    opened by HansBug 1
  • Failure when try to convert between numpy and torch on Windows Python3.10


    See here: https://github.com/opendilab/DI-treetensor/runs/7820313811?check_suite_focus=true

    The bug looks like this:

        @method_treelize(return_type=_get_tensor_class)
        def tensor(self: numpy.ndarray, *args, **kwargs):
    >       tensor_: torch.Tensor = torch.from_numpy(self)
    E       RuntimeError: Numpy is not available
    

    The only way I found to 'solve' this is to downgrade Python to version 3.9 or lower, so these tests will be skipped temporarily.
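
    For reference, a minimal reproduction sketch (assuming the treetensor.numpy ndarray API added in https://github.com/opendilab/DI-treetensor/pull/6; illustrative only):

    import numpy as np
    import treetensor.numpy as tnp

    arr = tnp.ndarray({'a': np.zeros((2, 3))})
    t = arr.tensor()  # RuntimeError: Numpy is not available (Windows + Python 3.10)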

    bug 
    opened by HansBug 0
Releases(v0.4.0)
  • v0.4.0(Aug 14, 2022)

    What's Changed

    • dev(hansbug): remove support for py3.6 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/12
    • pytorch upgrade to 1.12 by @zjowowen in https://github.com/opendilab/DI-treetensor/pull/11
    • dev(hansbug): add test for torch1.12.0 and python3.10 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/13
    • dev(hansbug): add stream support for paralleling the calculations in tree by @HansBug in https://github.com/opendilab/DI-treetensor/pull/10

    New Contributors

    • @zjowowen made their first contribution in https://github.com/opendilab/DI-treetensor/pull/11

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.3.0...v0.4.0

  • v0.3.0(Jul 15, 2022)

    What's Changed

    • dev(hansbug): use newer version of treevalue 1.4.1 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/9

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.2.1...v0.3.0

  • v0.2.1(Mar 22, 2022)

    What's Changed

    • fix(hansbug): fix uncompitable problem with walk by @HansBug in https://github.com/opendilab/DI-treetensor/pull/5
    • dev(hansbug): add tensor method for treetensor.numpy.ndarray by @HansBug in https://github.com/opendilab/DI-treetensor/pull/6
    • fix(hansbug): add subside support to all the functions. by @HansBug in https://github.com/opendilab/DI-treetensor/pull/7
    • doc(hansbug): add documentation for np.stack, np.split and other 3 functions. by @HansBug in https://github.com/opendilab/DI-treetensor/pull/8
    • release(hansbug): use version 0.2.1 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/4

    New Contributors

    • @HansBug made their first contribution in https://github.com/opendilab/DI-treetensor/pull/5

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.2.0...v0.2.1

  • v0.2.0(Jan 4, 2022)

    • Use newer version of treevalue>=1.2.0
    • Add support of torch 1.10.0
    • Add support of potc

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.1.0...v0.2.0

  • v0.1.0(Dec 26, 2021)

  • v0.0.1(Sep 30, 2021)
