Fast Discounted Cumulative Sums in PyTorch

Overview

This repository implements an efficient parallel algorithm for the computation of discounted cumulative sums and a Python package with differentiable bindings to PyTorch. The discounted cumsum operation is frequently seen in data science domains concerned with time series, including Reinforcement Learning (RL).
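
In its right-directed form (the return computation in RL), the operation maps an input sequence x to an output sequence y of the same length, where

y[i] = x[i] + gamma * x[i+1] + gamma^2 * x[i+2] + ... = sum over j >= i of gamma^(j - i) * x[j],

with gamma a scalar discount factor (typically between 0 and 1). The left-directed form sums over j <= i instead.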

The traditional sequential algorithm performs the computation of the output elements in a loop. For an input of size N, it requires O(N) operations and takes O(N) time steps to complete.

The proposed parallel algorithm requires a total of O(N log N) operations, but takes only O(log N) time steps, which is a favorable trade-off in many applications involving large inputs.

Features of the parallel algorithm:

  • Speed logarithmic in the input size
  • Better numerical precision than sequential algorithms

Features of the package:

  • CPU: sequential algorithm in C++
  • GPU: parallel algorithm in CUDA
  • Gradients computation wrt input
  • Both left and right directions of summation supported
  • PyTorch bindings

Usage

Installation

pip install torch-discounted-cumsum

API

  • discounted_cumsum_right: Computes discounted cumulative sums to the right of each position (a standard setting in RL)
  • discounted_cumsum_left: Computes discounted cumulative sums to the left of each position

Example

import torch
from torch_discounted_cumsum import discounted_cumsum_right

N = 8
gamma = 0.99
x = torch.ones(1, N).cuda()
y = discounted_cumsum_right(x, gamma)

print(y)

Output:

tensor([[7.7255, 6.7935, 5.8520, 4.9010, 3.9404, 2.9701, 1.9900, 1.0000]],
       device='cuda:0')

Up to K elements

The discounted sum over a window of only K elements follows from the full sums via the identity y_K[i] = y_N[i] - gamma^K * y_N[i + K] (with out-of-range terms treated as zero):

import torch
from torch_discounted_cumsum import discounted_cumsum_right

N = 8
K = 2
gamma = 0.99
x = torch.ones(1, N).cuda()
y_N = discounted_cumsum_right(x, gamma)
y_K = y_N - (gamma ** K) * torch.cat((y_N[:, K:], torch.zeros(1, K).cuda()), dim=1)

print(y_K)

Output:

tensor([[1.9900, 1.9900, 1.9900, 1.9900, 1.9900, 1.9900, 1.9900, 1.0000]],
       device='cuda:0')
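
Continuing the snippet above, the identity can be checked against a direct windowed sum (a minimal sketch; the double loop is for illustration only):

# Direct computation of y_K[i] = sum_{j=0..K-1} gamma^j * x[i+j]
y_ref = torch.zeros_like(x)
for i in range(N):
    for j in range(min(K, N - i)):
        y_ref[:, i] += (gamma ** j) * x[:, i + j]

print(torch.allclose(y_K, y_ref))  # expected: True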

Parallel Algorithm

For the sake of simplicity, the algorithm is explained for N=16. The processing is performed in-place in the input vector in log2 N stages. Each stage updates N / 2 positions in parallel (that is, in a single time step, given unrestricted parallelism). A stage is characterized by the size of the group of sequential elements being updated, computed as 2 ^ (stage - 1); the group stride is always twice the group size. The elements updated during a stage are highlighted with the respective stage color in the figure below. Input elements are denoted by their position ids in hex, and elements tagged with two symbols indicate the range over which the discounted partial sum is computed upon stage completion.

Each element update is an in-place addition of a discounted element, namely the element that follows the last updated element of the group. The discount factor is gamma raised to the power of the distance between the updated and the discounted elements. In the figure below, this operation is denoted by tilted arrows tagged with the Greek letter gamma. After the last stage completes, the output is written in place of the input.

In the CUDA implementation, N / 2 CUDA threads are allocated during each stage to update the respective elements. The strict separation of updates into stages via separate kernel invocations guarantees stage-level synchronization and global consistency of updates.
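
For reference, the staged procedure can be mimicked in plain Python (a minimal sketch of the algorithm described above, not the package's CUDA kernel; it assumes the input length is a power of two):

def discounted_cumsum_right_staged(x, gamma):
    x = list(x)
    N = len(x)
    num_stages = N.bit_length() - 1        # log2(N) stages
    for stage in range(1, num_stages + 1):
        group_size = 2 ** (stage - 1)      # sequential elements updated per group
        stride = 2 * group_size            # distance between group starts
        for group_start in range(0, N, stride):
            # The element following the last updated element of the group;
            # after the previous stage it already holds the discounted sum
            # over the next group_size input positions.
            src = group_start + group_size
            for i in range(group_start, group_start + group_size):
                # In the CUDA kernel, the N / 2 updates of a stage run in parallel.
                x[i] += (gamma ** (src - i)) * x[src]
    return x

For example, discounted_cumsum_right_staged([1.0] * 8, 0.99) reproduces the output of the first example up to floating-point error.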

The gradients wrt the input can be obtained from the gradients wrt the output by applying the same discounted cumsum with the direction of summation reversed.
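
A minimal sketch of this relationship, using the two API functions listed above (run on CPU for brevity):

import torch
from torch_discounted_cumsum import discounted_cumsum_right, discounted_cumsum_left

gamma = 0.99
x = torch.randn(1, 8, requires_grad=True)

y = discounted_cumsum_right(x, gamma)
grad_output = torch.randn_like(y)
grad_input, = torch.autograd.grad(y, x, grad_output)

# Backward of the right-directed sum is the left-directed sum of the output gradients
print(torch.allclose(grad_input, discounted_cumsum_left(grad_output, gamma)))  # expected: True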

Numerical Precision

The parallel algorithm produces a more numerically stable output than the sequential algorithm using the same scalar data type.

The comparison is performed between 3 runs with identical inputs (code). The first run casts the inputs to double precision and obtains the output reference using the sequential algorithm. Next, we run both the sequential and the parallel algorithm with the same inputs cast to single precision and compare the results to the reference. The comparison uses the L_inf norm, that is, the maximum absolute per-element discrepancy.

With a 10000-element non-zero-centered input (e.g., all elements equal to 1.0), the errors of the algorithms are 2.8e-4 (sequential) and 9.9e-5 (parallel). With zero-centered inputs (e.g., standard Gaussian noise), the errors are 1.8e-5 (sequential) and 1.5e-5 (parallel).
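
A sketch of this comparison, using a plain PyTorch loop as the sequential baseline (the discounted_cumsum_right_loop helper below is a stand-in for the benchmark code linked above; the parallel run assumes a CUDA device):

import torch
from torch_discounted_cumsum import discounted_cumsum_right

def discounted_cumsum_right_loop(x, gamma):
    # Sequential reference: y[:, i] = x[:, i] + gamma * y[:, i + 1]
    y = torch.zeros_like(x)
    running = torch.zeros(x.shape[0], dtype=x.dtype)
    for i in range(x.shape[1] - 1, -1, -1):
        running = x[:, i] + gamma * running
        y[:, i] = running
    return y

gamma = 0.99
x = torch.ones(1, 10000)                       # non-zero-centered input

reference = discounted_cumsum_right_loop(x.double(), gamma)
err_seq = (discounted_cumsum_right_loop(x, gamma).double() - reference).abs().max()
err_par = (discounted_cumsum_right(x.cuda(), gamma).cpu().double() - reference).abs().max()
print(err_seq.item(), err_par.item())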

Speed-up

We tested three implementations of the operation with the same 100000-element input (code):

  1. Sequential in PyTorch on CPU (as in REINFORCE) (Intel Xeon CPU, DGX-1)
  2. Sequential in C++ on CPU (Intel Xeon CPU, DGX-1)
  3. Parallel in CUDA (NVIDIA P-100, DGX-1)

The observed speed-ups are as follows:

  • PyTorch to C++: 387 times
  • PyTorch to CUDA: 36573 times
  • C++ to CUDA: 94 times
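
A rough single-run sketch of such a measurement (the linked benchmark code is the authoritative setup; warm-up and averaging over repeated runs are omitted here):

import time
import torch
from torch_discounted_cumsum import discounted_cumsum_right

gamma = 0.99
x_cpu = torch.randn(1, 100000)
x_gpu = x_cpu.cuda()

t0 = time.perf_counter()
discounted_cumsum_right(x_cpu, gamma)          # sequential C++ path
t_cpu = time.perf_counter() - t0

torch.cuda.synchronize()
t0 = time.perf_counter()
discounted_cumsum_right(x_gpu, gamma)          # parallel CUDA path
torch.cuda.synchronize()                       # wait for the kernel to finish
t_gpu = time.perf_counter() - t0

print('C++ to CUDA speed-up:', t_cpu / t_gpu)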

Ops-Space-Time Complexity

Assumptions:

  • Raising gamma to a power, multiplying the result by an element of x, and adding it to an element of y is counted as a single fused operation;
  • N is a power of two. When it isn't, the parallel algorithm's complexity is the same as with N equal to the next power of two.

Under these assumptions, the sequential algorithm takes N operations and N time steps to complete. The parallel algorithm takes 0.5 * N * log2 N operations and can be completed in log2 N time steps if the parallelism is unrestricted.
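
For example, with N = 2^20 the sequential algorithm needs about 10^6 time steps, whereas the parallel algorithm performs 0.5 * 2^20 * 20, roughly 10^7, operations but finishes in only 20 time steps given enough parallel processors.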

Both algorithms can be performed in-place; hence their space complexity is O(1).

In Other Frameworks

PyTorch

As of the time of writing, PyTorch does not provide discounted cumsum functionality via its API. PyTorch RL code samples (e.g., REINFORCE) suggest computing returns in a loop over reward items, as sketched below. Since most RL algorithms do not require differentiating through the returns, many code samples resort to the SciPy function listed further below.
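
For reference, the loop-based pattern looks roughly as follows (a sketch, with rewards a Python list of per-step rewards):

import torch

def compute_returns_loop(rewards, gamma):
    # Sequential O(N) computation of discounted returns: R[i] = r[i] + gamma * R[i + 1]
    returns = []
    running = 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.insert(0, running)
    return torch.tensor(returns)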

TensorFlow

TensorFlow provides the tf.scan API, which can be supplied with an appropriate lambda function implementing the formula above. Under the hood, however, tf.scan implements the traditional sequential algorithm.
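
A sketch of this approach (assuming TensorFlow 2.x; with reverse=True the scan runs from the last element towards the first while keeping the output in the original order):

import tensorflow as tf

gamma = 0.99
x = tf.ones([8])

# y[i] = x[i] + gamma * y[i + 1], computed by a sequential scan from the end
y = tf.scan(lambda acc, elem: elem + gamma * acc, x, reverse=True)
print(y)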

SciPy

SciPy provides the scipy.signal.lfilter function for computing the response of an IIR filter using the sequential algorithm; it can be used for the task at hand, as suggested in this StackOverflow response.
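
The commonly cited recipe looks roughly as follows (a sketch; x is a 1-D NumPy array):

import numpy as np
import scipy.signal

def discount_cumsum(x, gamma):
    # The IIR filter y[n] = x[n] + gamma * y[n - 1], applied to the reversed input
    # and reversed back, yields the right-directed discounted cumsum.
    return scipy.signal.lfilter([1], [1, -gamma], x[::-1], axis=0)[::-1]

print(discount_cumsum(np.ones(8), 0.99))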

Citation

To cite this repository, use the following BibTeX:

@misc{obukhov2021torchdiscountedcumsum,
  author={Anton Obukhov},
  year=2021,
  title={Fast discounted cumulative sums in PyTorch},
  url={https://github.com/toshas/torch-discounted-cumsum}
}