Training and Evaluation Code for Neural Volumes

Overview

This repository contains training and evaluation code for the paper Neural Volumes. The method learns a 3D volumetric representation of objects and scenes that can be rendered and animated from only calibrated multi-view video.

Citing Neural Volumes

If you use Neural Volumes in your research, please cite the paper:

@article{Lombardi:2019,
 author = {Stephen Lombardi and Tomas Simon and Jason Saragih and Gabriel Schwartz and Andreas Lehrmann and Yaser Sheikh},
 title = {Neural Volumes: Learning Dynamic Renderable Volumes from Images},
 journal = {ACM Trans. Graph.},
 issue_date = {July 2019},
 volume = {38},
 number = {4},
 month = jul,
 year = {2019},
 issn = {0730-0301},
 pages = {65:1--65:14},
 articleno = {65},
 numpages = {14},
 url = {http://doi.acm.org/10.1145/3306346.3323020},
 doi = {10.1145/3306346.3323020},
 acmid = {3323020},
 publisher = {ACM},
 address = {New York, NY, USA},
}

File Organization

The root directory contains several subdirectories and files:

data/ --- custom PyTorch Dataset classes for loading included data
eval/ --- utilities for evaluation
experiments/ --- location of input data and training and evaluation output
models/ --- PyTorch modules for Neural Volumes
render.py --- main evaluation script
train.py --- main training script

Requirements

  • Python (3.6+)
  • PyTorch (1.2+)
  • NumPy
  • Pillow
  • Matplotlib
  • ffmpeg (in PATH, needed to render videos)
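
The Python packages can typically be installed with pip (the package names below are the usual PyPI names; this is an assumption, not an official install command from the repository):

pip install torch numpy pillow matplotlib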

How to Use

There are two main scripts in the root directory: train.py and render.py. Both take an experiment configuration file that defines the dataset to use and the model options (e.g., the type of decoder).

A sample set of input data is provided in the v0.1 release; download it and extract it into the root directory of the repository. experiments/dryice1/data contains the input images and camera calibration data, and experiments/dryice1/experiment1 contains an example experiment configuration file (experiments/dryice1/experiment1/config.py).
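
Since each configuration is itself a Python file, the scripts load it by path. A minimal sketch of that pattern (an illustrative assumption, not necessarily the repository's exact loader):

import importlib.util
import sys

def load_config(path):
    # import a Python config file (e.g., experiments/dryice1/experiment1/config.py) by path
    spec = importlib.util.spec_from_file_location("config", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

if __name__ == "__main__":
    config = load_config(sys.argv[1])
    # the experiment settings (dataset, model options) are then read off this module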

To train the model:

python train.py experiments/dryice1/experiment1/config.py

To render a video of a trained model:

python render.py experiments/dryice1/experiment1/config.py Render

License

See the LICENSE file for details.

Comments
  • Training with our own data

    Hi,
    I have a few questions about how the data should be formatted, and about the format of the provided dryice1 data.

    • Does the model expect world-space coordinates in meters? That is, if my extrinsics are already in meters, do I still need world_scale=1/256. in the config.py file?
    • Are the extrinsics world2cam, with an OpenCV-like rotation convention (y-down, z-forward, x-right), assuming identity in the pose.txt file? (The standard OpenCV convention is sketched after this thread.)
    • How long do I need to train for about 200 frames? Also, in the config.py file it seems you skip some frames; is that OK to do for my own sequence as well?
    • In the KRT file, I see there are 5 parameters above the RT matrix. Are these distortion coefficients in OpenCV format, and is it correct that they are not used?
    • I did not visualize your cameras, so I am not sure how they are distributed. Will it be a problem if I use 50 cameras equally distributed over a half-hemisphere, with the subject already at the world origin and 3.5 meters from every camera? That is, do I need to filter the training cameras so that the side of the subject not seen by the 3 input cameras is excluded?
    • How do I choose the input cameras? I have a visualization of the cameras. Which camera config should I use? Is this more a question of which testing camera poses I intend to have, i.e., the narrower the testing cameras' range of view, the closer together the input training cameras can be? Config_0 is more orthogonal and Config_1 sees less of the backside.
    opened by zawlin 32
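
    For reference, the standard OpenCV-style world2cam convention these questions refer to can be sketched as follows (a generic illustration with made-up intrinsics; it does not confirm this repository's exact conventions):

    import numpy as np

    def project(K, R, t, x_world):
        # world2cam: rotate, then translate into the camera frame
        # (OpenCV axes: x right, y down, z forward)
        x_cam = R @ x_world + t
        uv = K @ (x_cam / x_cam[2])  # perspective division, then intrinsics
        return uv[:2]

    # hypothetical 1024x1024 camera, 3.5 m in front of the world origin
    K = np.array([[1000.0, 0.0, 512.0],
                  [0.0, 1000.0, 512.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.array([0.0, 0.0, 3.5])
    print(project(K, R, t, np.zeros(3)))  # -> pixel (512, 512)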
  • Some questions about coordinate transformation

    Hello, thanks for releasing your code. I am impressed by your work. Now I hope to run your code with my own dataset. I have a few questions.

    First, I see that pose.txt is used in the code to put the object at the center. If I use my own data, will this file still work?

    Second, I see the code constrains raypos to lie between -1 and 1. Is it the matrix in this pose file that narrows the range to [-1, 1]? My own dataset's range is different. (A generic normalization sketch follows this thread.)

    Third, does the code limit the range of the template? Does it have to be between 0 and 255?

    Thanks a lot in advance!

    opened by maobenz 3
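
    As a generic illustration of the kind of normalization asked about above (an assumption for illustration; this is not necessarily how pose.txt is actually applied), a similarity transform can map world coordinates into the [-1, 1] cube used for ray sampling:

    import numpy as np

    def normalization_transform(center, radius):
        # 4x4 matrix mapping a region of the given center/radius in world
        # space into the [-1, 1] cube
        T = np.eye(4)
        T[:3, :3] /= radius                       # scale world units down
        T[:3, 3] = -np.asarray(center) / radius   # move the object to the origin
        return T

    # hypothetical subject centered at (0, 0, 1) with ~0.5 m extent
    T = normalization_transform(center=(0.0, 0.0, 1.0), radius=0.5)
    print(T @ np.array([0.0, 0.0, 1.25, 1.0]))  # -> [0, 0, 0.5, 1], inside the cube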
  • Location of the volume

    Hi there,

    I wonder whether the origin of the volume is at (0,0,0)?

    I'm testing the method on a public dataset (http://people.csail.mit.edu/drdaniel/mesh_animation), and I know exactly where (0,0,0) is in the images. But the volume seems to float around the scene. Here is the first preview from the training process (image prog_000001).

    Each camera points at the scene from an opposite side, so I expect the volume's location in the images to mirror that. But for some reason, the volumes appear on the same side in all images. Can you help?

    Thank you.

    opened by lochuynh1989 3
  • Any plan to release all the data presented in the paper?

    Hi @stephenlombardi,

    Thanks for sharing this great work. I was wondering whether you have any plans to release all the data used in the paper (apart from dryice1)?

    Best, Zirui

    opened by ziruiw-dev 2
  • Block-wise initialization scheme

    Hi, is there any paper describing the block-wise weight initialization scheme used here? (A generic sketch of one possible scheme follows this thread.)

    https://github.com/facebookresearch/neuralvolumes/blob/8c5fad49b2b05b4b2e79917ee87299e7c1676d59/models/utils.py#L73

    opened by denkorzh 2
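
    In case a concrete example helps while waiting for an answer: a generic sketch of what a block-wise initializer for a linear layer can look like (an illustration only, not necessarily the scheme in models/utils.py), where each block of output rows is initialized independently:

    import torch
    import torch.nn as nn

    def blockwise_init_(linear, num_blocks):
        # initialize each block of output rows independently, so every block
        # starts with a calibrated gain regardless of the layer's total width
        out_features = linear.weight.shape[0]
        assert out_features % num_blocks == 0
        block = out_features // num_blocks
        with torch.no_grad():
            for i in range(num_blocks):
                nn.init.xavier_uniform_(linear.weight[i * block:(i + 1) * block])
            if linear.bias is not None:
                linear.bias.zero_()

    layer = nn.Linear(256, 512)
    blockwise_init_(layer, num_blocks=8)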
  • Is there a way to render a 3D file from this?

    Hello, I was wondering if there is a way to export an .obj/.fbx file, along with corresponding materials, from this. If not, do you have any suggestions on how I might extend the code to add that functionality? (See the sketch after this thread.)

    opened by arlorostirolla 1
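
    One common route (a sketch assuming you can evaluate the learned opacity volume on a regular grid; the grid size, threshold, and helper name here are made up for illustration) is to run marching cubes on the opacity grid and write the result as an .obj:

    import numpy as np
    from skimage import measure

    def volume_to_obj(alpha, path, level=0.5):
        # alpha: (D, H, W) opacity grid sampled from the learned volume
        verts, faces, _, _ = measure.marching_cubes(alpha, level=level)
        with open(path, "w") as f:
            for v in verts:
                f.write("v {} {} {}\n".format(*v))
            for tri in faces + 1:  # .obj indices are 1-based
                f.write("f {} {} {}\n".format(*tri))

    # toy example: a sphere-like blob on a 64^3 grid
    r = np.linalg.norm(np.stack(np.meshgrid(*[np.linspace(-1, 1, 64)] * 3)), axis=0)
    volume_to_obj((r < 0.5).astype(np.float32), "blob.obj")

    Note that marching cubes only yields geometry; materials and .fbx conversion would need extra tooling (e.g., a DCC package).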
  • How can I train and render a person image?

    Hi, my name is Luan. I am trying to train and render on images of a person, but I am not able to get it running. Could you create an example folder with the settings set up for a person image? Thank you.

    opened by LuanDalOrto 1
  • code for hybrid rendering (section 6.2) doesn't exist?

    Hello,

    First of all, thank you for releasing the code for your seminal work. I really think neural volumes is one of the works that popularized differentiable rendering and inspired future works such as neural radiance fields.

    My question is whether this codebase includes the code for the hybrid rendering method outlined in section 6.2 of the paper. I'm trying to fit Neural Volumes to multi-view video of a full-body human, similar to the 5th subfigure in Fig. 1 of the main paper, but after reading more carefully it seems I would need hybrid rendering to capture the fine details.

    Could you

    1. confirm whether hybrid rendering is included in this codebase, and
    2. say whether hybrid rendering was used to render the full-body human in Fig. 1 of the main paper?

    Thank you in advance.

    opened by andrewsonga 1
  • Misaligned views in rendering

    Hi,

    I am testing the network on the MIT dataset. When I specify a single camera to render, it looks fine throughout the timeline. However, when rendering the rotating video, the cameras are misaligned, as shown in the attached screenshot: all cameras appear clustered at the center, and the views are spread around within the range the cameras cover. Could this be an error in the KRT file or the configuration?

    Any suggestion is welcome. (Screenshot: issue_MIT_5_cams.)

    opened by CorneliusHsiao 1
Releases

  • v0.1

Owner

Meta Research