Official repo for AutoInt: Automatic Integration for Fast Neural Volume Rendering (CVPR 2021)

Overview

Project Page | Video | Paper

Open Colab

PyTorch implementation of automatic integration.

AutoInt: Automatic Integration for Fast Neural Volume Rendering
David B. Lindell*, Julien N. P. Martel*, Gordon Wetzstein
Stanford University
*denotes equal contribution
CVPR 2021

Quickstart

To get started quickly, we provide a Colab link above. Otherwise, you can clone this repo and follow the instructions below.

To setup a conda environment, download example training data, begin the training process, and launch Tensorboard:

conda env create -f environment.yml
conda activate autoint 
cd experiment_scripts
python train_1d_integral.py
tensorboard --logdir=../logs --port=6006

This example will fit a grad network to a 1D signal and evaluate the integral. You can monitor the training in your browser at localhost:6006. You can also train a network on the sparse tomography problem presented in the paper with python train_sparse_tomography.py.
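
If you are curious what this experiment does under the hood, here is a minimal, self-contained PyTorch sketch of the AutoInt idea. It is not the repo's code: it uses autograd to differentiate the integral network instead of the explicit grad-network construction from the paper, and the test signal, network size, and hyperparameters are arbitrary placeholders.

import math
import torch
import torch.nn as nn

# Hypothetical 1D integrand (train_1d_integral.py defines its own signals).
def f(x):
    return torch.sin(3.0 * x)

# F is the "integral network"; its derivative dF/dx is fit to f.
F = nn.Sequential(nn.Linear(1, 64), nn.SiLU(),
                  nn.Linear(64, 64), nn.SiLU(),
                  nn.Linear(64, 1))
opt = torch.optim.Adam(F.parameters(), lr=1e-3)

for step in range(2000):
    x = 2.0 * math.pi * torch.rand(256, 1)   # training points in [0, 2*pi]
    x.requires_grad_(True)
    # dF/dx via autograd; AutoInt instead instantiates an explicit grad network
    dFdx = torch.autograd.grad(F(x).sum(), x, create_graph=True)[0]
    loss = ((dFdx - f(x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# A definite integral of f now costs two forward passes of F.
a = torch.zeros(1, 1)
b = torch.full((1, 1), math.pi)
with torch.no_grad():
    print("network:", (F(b) - F(a)).item(), " analytic:", 2.0 / 3.0)

Once the antiderivative F is trained, any definite integral of the fitted signal reduces to a difference of two forward passes, which is the property the neural rendering experiments exploit.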

AutoInt for Neural Rendering

Automatic integration can be used to learn closed-form solutions to the volume rendering equation, an integral equation that accumulates transmittance and emittance along rays to render an image. While conventional neural renderers require hundreds of samples along each ray to evaluate these integrals (and hence hundreds of costly forward passes through a network), AutoInt allows evaluating them with far fewer forward passes.
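
For reference, the integral being learned is the standard emission-absorption volume rendering equation (NeRF-style notation; the paper's exact parameterization may differ slightly):

$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt, \qquad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)$$

AutoInt trains networks whose derivatives match the integrands, splits each ray into a small number of piecewise sections, and evaluates each section's integral as the difference of the corresponding antiderivative network at the section endpoints, so each section costs two forward passes rather than many quadrature samples.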

Training

To run AutoInt for neural rendering, first set up the conda environment with

conda env create -f environment.yml
conda activate autoint 

Then, download the datasets to the data folder. Training is supported on three datasets: the synthetic Blender data from NeRF and the LLFF scenes are hosted here, and the DeepVoxels data are hosted here.

Finally, use the provided config files in the experiment_scripts/configs folder to train on these datasets. For example, to train on a NeRF Blender dataset, run the following

python train_autoint_radiance_field.py --config ./configs/config_blender_tiny.ini
tensorboard --logdir=../logs/ --port=6006

This will train a small, low-resolution scene. To train scenes at high resolution (which requires a few days of training time), use the config_blender.ini, config_deepvoxels.ini, or config_llff.ini config files.

Rendering

Rendering from a trained model can be done with the following command.

python train_autoint_radiance_field.py --config /path/to/config/file --render_model ../logs/path/to/log/directory <epoch number> --render_output /path/to/output/folder

Here, the --render_model flag indicates the log directory where the code saves the models and checkpoints; for the default Blender dataset this would be ../logs/blender_lego. The epoch number can be found by looking at the numbers in the saved checkpoint filenames in ../logs/blender_lego/checkpoints/. Finally, --render_output should specify a folder where the rendered output images will be written.
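
For example, a complete invocation for a run trained with the tiny Blender config might look like the following, where the log directory name depends on what your training run created under ../logs/, and the epoch number (here 1000) and output folder are placeholders to replace with your own values:

python train_autoint_radiance_field.py --config ./configs/config_blender_tiny.ini --render_model ../logs/blender_lego 1000 --render_output ../renders/blender_lego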

Citation

@inproceedings{autoint2021,
  title={AutoInt: Automatic Integration for Fast Neural Volume Rendering},
  author={David B. Lindell and Julien N. P. Martel and Gordon Wetzstein},
  year={2021},
  booktitle={Proc. CVPR},
}