[CVPR 2022] NPBG++: Accelerating Neural Point-Based Graphics

Project Page | Paper

This repository contains the official Python implementation of the paper.

The repository also contains a faithful implementation of NPBG.

We provide pipelines for working with the following datasets: ScanNet, NeRF-Synthetic, H3DS, and DTU.

We follow the PyTorch3D convention for coordinate systems and cameras.
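For reference, here is a minimal sketch (not part of this repository) of how cameras are expressed in the PyTorch3D convention, where the world-to-view transform is X_cam = X_world @ R + T; the rotation, translation, and focal length values below are placeholders:

import torch
from pytorch3d.renderer import PerspectiveCameras

# One camera with identity rotation, placed 2 units in front of the world origin.
# In PyTorch3D, R has shape (N, 3, 3), T has shape (N, 3), and points transform
# as X_cam = X_world @ R + T (row-vector convention).
R = torch.eye(3).unsqueeze(0)
T = torch.tensor([[0.0, 0.0, 2.0]])
cameras = PerspectiveCameras(R=R, T=T, focal_length=1.0)

# Transform a world-space point into the camera frame using the same convention.
points_world = torch.tensor([[[0.0, 0.0, 0.0]]])
points_cam = cameras.get_world_to_view_transform().transform_points(points_world)
print(points_cam)  # tensor([[[0., 0., 2.]]])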

Changelog

  • [April 27, 2022] Added more example data and point clouds
  • [April 5, 2022] Initial code release

Dependencies

python -m venv ~/.venv/npbgplusplus
source ~/.venv/npbgplusplus/bin/activate
pip install -r requirements.txt

# install pytorch3d
curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
tar xzf 1.10.0.tar.gz
export CUB_HOME=$PWD/cub-1.10.0
pip install "git+https://github.com/facebookresearch/[email protected]" --no-cache-dir --verbose

# install torch_scatter (2.0.8)
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.1+${CUDA}.html
# where ${CUDA} should be replaced by either cpu, cu101, cu102, or cu111 depending on your PyTorch installation.
# ${CUDA} must match torch.version.cuda (not the runtime or driver version)
# using 1.7.1 instead of 1.7.0 produces an "incompatible cuda version" error
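# A quick way to check which CUDA build your PyTorch was compiled against,
# so you can pick the matching tag above (prints e.g. 10.2 -> use cu102):
python -c "import torch; print(torch.version.cuda)"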

python setup.py build develop
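
As an optional sanity check (not part of the original instructions), you can verify that the core dependencies import cleanly:

python -c "import torch, pytorch3d, torch_scatter; print(torch.__version__, torch.version.cuda)"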

Below are examples of how to run the individual stages of the different models on the different datasets.

How to run NPBG++

Checkpoints and example data are available here.

Run training
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbgpp_scannet datasets=scannet_pretrain datasets.n_point=6e6 system=npbgpp_sphere system.visibility_scale=0.5 trainer.max_epochs=39 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbgpp_nerf datasets=nerf_blender_pretrain system=npbgpp_sphere system.visibility_scale=1.0 trainer.max_epochs=24 dataloader.train_data_mode=each weights_path=experiments/npbgpp_scannet/checkpoints/epoch38.ckpt
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbgpp_h3ds datasets=h3ds_pretrain system=npbgpp_sphere system.visibility_scale=1.0 trainer.max_epochs=24 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1 weights_path=experiments/npbgpp_scannet/checkpoints/epoch38.ckpt
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbgpp_dtu datasets=dtu_pretrain system=npbgpp_sphere system.visibility_scale=1.0 trainer.max_epochs=36 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1  weights_path=experiments/npbgpp_scannet/checkpoints/epoch38.ckpt
Run testing
python train_net.py trainer.gpus=1 hydra.run.dir=experiments/npbgpp_eval_scan118 datasets=dtu_one_scene datasets.data_root=$\{hydra:runtime.cwd\}/example/DTU_masked datasets.scene_name=scan118 system=npbgpp_sphere system.visibility_scale=1.0 weights_path=./checkpoints/npbgpp_dtu_nm_mvs_ft_epoch35.ckpt eval_only=true dataloader=small
Run finetuning of coefficients
python train_net.py trainer.gpus=1 hydra.run.dir=experiments/npbgpp_5ae021f2805c0854_ft datasets=h3ds_one_scene datasets.data_root=$\{hydra:runtime.cwd\}/example/H3DS datasets.selection_count=0 datasets.train_num_samples=2000 datasets.train_image_size=null datasets.train_random_shift=false datasets.train_random_zoom=[0.5,2.0] datasets.scene_name=5ae021f2805c0854 system=coefficients_ft system.max_points=1e6 system.descriptors_save_dir=$\{hydra:run.dir\}/descriptors trainer.max_epochs=20 system.descriptors_pretrained_dir=experiments/npbgpp_eval_5ae021f2805c0854/descriptors weights_path=$\{hydra:runtime.cwd\}/checkpoints/npbgpp_h3ds.ckpt dataloader=small
Run testing with finetuned coefficients
python train_net.py trainer.gpus=1 hydra.run.dir=experiments/npbgpp_5ae021f2805c0854_test datasets=h3ds_one_scene datasets.data_root=$\{hydra:runtime.cwd\}/example/H3DS datasets.selection_count=0 datasets.scene_name=5ae021f2805c0854 system=coefficients_ft system.max_points=1e6 system.descriptors_save_dir=$\{hydra:run.dir\}/descriptors system.descriptors_pretrained_dir=experiments/npbgpp_5ae021f2805c0854_ft/descriptors weights_path=experiments/npbgpp_5ae021f2805c0854_ft/checkpoints/last.ckpt dataloader=small eval_only=true

How to run NPBG

Run pretraining
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_scannet datasets=scannet_pretrain datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_scannet/result/descriptors trainer.max_epochs=39 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1 trainer.limit_val_batches=0 system.max_points=11e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_nerf datasets=nerf_blender_pretrain datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_nerf/result/descriptors trainer.max_epochs=24 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1 trainer.limit_val_batches=0 system.max_points=4e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_h3ds datasets=h3ds_pretrain datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=null datasets.train_random_shift=false datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_h3ds/result/descriptors trainer.max_epochs=24 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1 trainer.limit_val_batches=0 system.max_points=3e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_dtu_nm datasets=dtu_pretrain datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_dtu_nm/result/descriptors trainer.max_epochs=36 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1 trainer.limit_val_batches=0 system.max_points=3e6
Run fine-tuning on 1 scene
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_scannet_0045 datasets=scannet_one_scene datasets.scene_name=scene0045_00 datasets.n_point=6e6 datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_scannet_0045/result/descriptors system.max_scenes_per_train_epoch=1 trainer.max_epochs=20 weights_path=experiments/npbg_scannet/result/checkpoints/epoch38.ckpt system.max_points=6e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_nerf_hotdog datasets=nerf_blender_one_scene datasets.scene_name=hotdog datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_nerf_hotdog/result/descriptors system.max_scenes_per_train_epoch=1 trainer.max_epochs=20 weights_path=experiments/npbg_nerf/result/checkpoints/epoch23.ckpt system.max_points=4e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_h3ds_5ae021f2805c0854 datasets=h3ds_one_scene datasets.scene_name=5ae021f2805c0854 datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=null datasets.train_random_shift=false datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_h3ds_5ae021f2805c0854/result/descriptors system.max_scenes_per_train_epoch=1 trainer.max_epochs=20 weights_path=experiments/npbg_h3ds/result/checkpoints/epoch23.ckpt system.max_points=3e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_dtu_nm_scan110 datasets=dtu_one_scene datasets.scene_name=scan110 datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_dtu_nm_scan110/result/descriptors system.max_scenes_per_train_epoch=1 trainer.max_epochs=20 weights_path=experiments/npbg_dtu_nm/result/checkpoints/epoch35.ckpt system.max_points=3e6

Citation

If you find our work useful in your research, please consider citing:

@article{rakhimov2022npbg++,
  title={NPBG++: Accelerating Neural Point-Based Graphics},
  author={Rakhimov, Ruslan and Ardelean, Andrei-Timotei and Lempitsky, Victor and Burnaev, Evgeny},
  journal={arXiv preprint arXiv:2203.13318},
  year={2022}
}

License

See the LICENSE for more details.
