
Overview

PyNIF3D

License: MIT Read the Docs

PyNIF3D is an open-source PyTorch-based library for research on neural implicit functions (NIF)-based 3D geometry representation. It aims to accelerate research by providing a modular design that allows for easy extension and combination of NIF-related components, as well as readily available paper implementations and dataset loaders.

As of August 2021, the following implementations are supported:

  • Convolutional Occupancy Networks (CON)
  • Neural Radiance Fields (NeRF)
  • Implicit Differentiable Renderer (IDR)
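
A quick taste of the API is sketched below for the CON pipeline. This is a minimal sketch, not the definitive API: the import path, class name, and constructor defaults are assumptions made for illustration (consult the API documentation for the real names), while the call signature model(input_points, query_points) matches examples/con/train.py, quoted in the comments section further down.

import torch

# Hypothetical import path and class name, for illustration only.
from pynif3d.pipeline.con import ConvolutionalOccupancyNetworks

model = ConvolutionalOccupancyNetworks()

input_points = torch.rand(1, 3000, 3)  # (batch, n_points, 3) input point cloud
query_points = torch.rand(1, 2048, 3)  # (batch, n_queries, 3) occupancy query locations

# Call signature as used in examples/con/train.py.
prediction = model(input_points, query_points)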

Installation

To get started with PyNIF3D, you can either install it on your local machine using pip or build the provided Dockerfile.

Local Installation

pip install --user "git+https://github.com/pfnet/pynif3d.git"

The following packages need to be installed in order to ensure the proper functioning of all the PyNIF3D features:

  • torch_scatter>=1.3.0
  • torchsearchsorted>=1.0

A script has been provided to take care of these installation steps for you. Please download it to a directory of your choice and run:

bash post_install.bash
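
For reference, the script essentially boils down to pip-installing the two pinned dependencies listed above. A minimal sketch, assuming torch-scatter comes from PyPI and torchsearchsorted from its upstream GitHub repository (both sources are assumptions; the script shipped with the repository is authoritative):

pip install --user "torch-scatter>=1.3.0"
pip install --user "git+https://github.com/aliutkus/torchsearchsorted"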

Docker Build

Enabling CUDA Support

Please make sure the following dependencies are installed in order to build the Docker image with CUDA support:

  • nvidia-docker
  • nvidia-container-runtime

Then register the nvidia runtime by adding the following to /etc/docker/daemon.json:

{
    "runtimes": {
        "nvidia": {
            [...]
        }
    },
    "default-runtime": "nvidia"
}
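
For reference, a complete nvidia runtime entry typically looks as follows; this mirrors the configuration documented for nvidia-container-runtime, so verify the runtime path on your system:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}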

Restart the Docker daemon:

sudo systemctl restart docker

You should now be able to build a Docker image with CUDA support.

Building Dockerfile

git clone https://github.com/pfnet/pynif3d.git
cd pynif3d && nvidia-docker build -t pynif3d .

Running the Container

nvidia-docker run -it pynif3d bash

Tutorials

Get started with PyNIF3D using the examples provided below:

  • NeRF Tutorial
  • CON Tutorial
  • IDR Tutorial

In addition to the tutorials, pretrained models are also provided and ready to be used. Please consult this page for more information.

License

PyNIF3D is released under the MIT license. Please refer to this document for more information.

Contributing

We welcome any new contributions to PyNIF3D. Please make sure to read the contributing guidelines before submitting a pull request.

Documentation

Learn more about PyNIF3D by reading the API documentation.

Comments
  • [Question] The default train-run of CON caused Out-Of-Memory

    (Not an urgent question.)

    I ran the training script from the CON example with the default arguments (= grid mode), using ShapeNet (downloaded with the occupancy_networks repo's script) on a 32 GB GPU. However, it caused an OOM error. When setting -bs 24, it works (memory usage: 30622 MiB / 32510 MiB). Is this the intended behavior?

    $ python -u examples/con/train.py -dd /mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/occupancy_networks/data/ShapeNet -sd saved_models_grid
    Traceback (most recent call last):
      File "examples/con/train.py", line 218, in <module>
        main()
      File "examples/con/train.py", line 214, in main
        train(dataset, model, optimizer, args)
      File "examples/con/train.py", line 103, in train
        prediction = model(input_points, query_points)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/pipeline/con.py", line 99, in forward
        features = self.feature_encoder(input_points)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/local_pool_pointnet.py", line 275, in forward
        input_points, c, feature_grid=grid_id
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/local_pool_pointnet.py", line 191, in generate_coordinate_features
        fea_grid = self.feature_processing_fn(fea_grid)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/unet3d.py", line 289, in forward
        x = layer(encoders_features[idx + 1], x)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/unet3d.py", line 172, in forward
        x = self.layer(x)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/unet3d.py", line 82, in forward
        x = self.relu(self.convolution1(self.group_norm1(x)))
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/normalization.py", line 246, in forward
        input, self.num_groups, self.weight, self.bias, self.eps)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2112, in group_norm
        torch.backends.cudnn.enabled)
    RuntimeError: CUDA out of memory. Tried to allocate 3.00 GiB (GPU 0; 31.75 GiB total capacity; 27.60 GiB already allocated; 2.92 GiB free; 27.72 GiB reserved in total by PyTorch)
    

    The environment (at mnj) is as follows (collected by running https://github.com/pytorch/pytorch/blob/master/torch/utils/collect_env.py):

    PyTorch version: 1.7.1
    Is debug build: False
    CUDA used to build PyTorch: 10.2
    ROCM used to build PyTorch: N/A
    
    OS: Ubuntu 18.04.5 LTS (x86_64)
    GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
    Clang version: Could not collect
    CMake version: version 3.10.2
    Libc version: glibc-2.10
    
    Python version: 3.7.4 (default, Aug 13 2019, 20:35:49)  [GCC 7.3.0] (64-bit runtime)
    Python platform: Linux-5.4.0-58-generic-x86_64-with-debian-buster-sid
    Is CUDA available: True
    CUDA runtime version: 10.2.89
    GPU models and configuration:
    GPU 0: Tesla V100-SXM2-32GB
    GPU 1: Tesla V100-SXM2-32GB
    
    Nvidia driver version: 460.91.03
    cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
    HIP runtime version: N/A
    MIOpen runtime version: N/A
    
    Versions of relevant libraries:
    [pip3] numpy==1.20.1
    [pip3] pytorch-pfn-extras==0.3.2
    [pip3] torch==1.7.1
    [pip3] torchtext==0.8.1
    [pip3] torchvision==0.8.2
    [conda] blas                      1.0                         mkl
    [conda] cudatoolkit               10.2.89              hfd86e86_1
    [conda] mkl                       2020.2                      256
    [conda] mkl-service               2.3.0            py37he8ac12f_0
    [conda] mkl_fft                   1.3.0            py37h54f3939_0
    [conda] mkl_random                1.1.1            py37h0573a6f_0
    [conda] numpy                     1.19.2           py37h54aff64_0
    [conda] numpy-base                1.19.2           py37hfa32c7d_0
    [conda] pytorch                   1.7.1           py3.7_cuda10.2.89_cudnn7.6.5_0    pytorch
    [conda] pytorch3d                 0.4.0           py37_cu102_pyt171    pytorch3d
    [conda] torchvision               0.8.2                py37_cu102    pytorch
    
    question high priority 
    opened by soskek 3
  • Add badge for readthedocs.org

    Add a badge for displaying the status of the API documentation build.

    Tasks to be completed

    • [ ] Update README.md

    Definition of Done: The badge correctly shows up on README.md.

    normal priority size-XS 
    opened by mihaimorariu 0
  • Add .readthedocs.yaml

    The API documentation successfully builds locally, but not when the project is imported into readthedocs.org.

    Tasks to be completed

    • [ ] Add .readthedocs.yaml

    Definition of Done: The documentation successfully builds.

    normal priority size-XS 
    opened by mihaimorariu 0
  • Remove post_install.bash

    The installation procedure currently requires running the post_install.bash script in order to install torchsearchsorted and torch_scatter. These dependencies should be added to setup.py instead, allowing users to install PyNIF3D simply via pip install -e. The only reason the post-installation script exists is that PyNIF3D has not yet been tested with newer versions of the two dependencies.
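
    For illustration, the proposed change would look roughly like the setup.py fragment below. This is a minimal sketch, assuming both packages are resolvable by pip at the pinned versions:

    from setuptools import find_packages, setup

    setup(
        name="pynif3d",
        packages=find_packages(),
        install_requires=[
            "torch-scatter>=1.3.0",    # currently installed by post_install.bash
            "torchsearchsorted>=1.0",  # currently installed by post_install.bash
        ],
    )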

    Tasks to be completed

    • [ ] TODO

    Definition of Done: A clear and concise description of the conditions for marking the issue as completed.

    normal priority size-XS 
    opened by mihaimorariu 0
  • Add color jitter and on-the-fly loading to the DTU dataset loader (pixelNeRF)

    Implement the DTU dataset loader for the pixelNeRF paper.

    Tasks to be completed

    • [x] Implement the color jitter
    • [x] Implement the on-the-fly loading
    • [x] Review

    Definition of Done: All unit tests are passing.

    normal priority size-XS 
    opened by mihaimorariu 0
  • Add pipeline for PixelNeRF

    Integrate all the components of PixelNeRF into the pipeline.

    Tasks to be completed

    • [ ] Implement PixelNeRF pipeline
    • [ ] Add unit tests
    • [ ] Review

    Definition of Done: All unit tests are passing.

    feature normal priority size-M 
    opened by mihaimorariu 0
  • Pixel to camera conversion

    Add helper function for pixel to camera conversion.
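
    A minimal sketch of what such a helper could look like under the standard pinhole camera model; the function name and signature are hypothetical, for illustration only:

    import torch

    def pixel_to_camera(uv, depth, intrinsics):
        # uv: (N, 2) pixel coordinates, depth: (N,) z-depths,
        # intrinsics: (3, 3) camera matrix K.
        # Back-projection: x_cam = depth * K^-1 @ [u, v, 1]^T.
        ones = torch.ones(len(uv), 1, dtype=uv.dtype, device=uv.device)
        uv_homogeneous = torch.cat([uv, ones], dim=-1)             # (N, 3)
        directions = uv_homogeneous @ torch.inverse(intrinsics).T  # (N, 3)
        return directions * depth[:, None]                         # (N, 3) camera-space points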

    Tasks to be completed

    • [ ] Implement the helper function
    • [ ] Add unit tests
    • [ ] Review

    Definition of Done: All the unit tests are passing.

    feature normal priority size-XS 
    opened by mihaimorariu 0
  • Add pixelNeRF to the repository

    An implementation of the pixelNeRF paper (https://arxiv.org/abs/2012.02190) will be added to the repository.

    Tasks to be completed

    • [x] Implement DTU dataset loader
    • [x] Implement the encoder
    • [x] Implement the NIF model
    • [x] Implement the renderer
    • [x] Implement the pipeline
    • [x] Implement the losses
    • [x] Write tutorial on how to use the code
    • [ ] Review

    Definition of Done

    • [x] The results are reproduced
    • [x] Training, evaluation scripts are provided
    • [x] Tutorial is provided
    feature normal priority size-L 
    opened by mihaimorariu 0
  • Support for multi-batch processing in torchsearchsorted

    The implementation of torchsearchsorted that is currently being used does not support multi-batch processing. A for loop is currently used in NeRF training to handle batch sizes larger than one, but it significantly slows down training. This needs to be fixed.
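
    For context, PyTorch 1.6+ ships a natively batched torch.searchsorted, which is one candidate replacement for the loop. A rough sketch with illustrative shapes:

    import torch

    bins = torch.sort(torch.rand(8, 128), dim=-1).values  # (batch, n_bins), sorted per sample
    values = torch.rand(8, 64)                            # (batch, n_queries)

    # Current approach: one searchsorted call per sample in the batch.
    indices_loop = torch.stack(
        [torch.searchsorted(b, v) for b, v in zip(bins, values)]
    )

    # Batched approach: a single call over the whole batch.
    indices_batched = torch.searchsorted(bins, values)

    assert torch.equal(indices_loop, indices_batched)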

    Tasks to be completed

    • [ ] TODO

    Definition of Done: Training NeRF with batch size > 1 yields similar PSNR on the evaluation set after removing the for loop and replacing it with a multi-batch-based torchsearchsorted.

    feature low priority size-XS 
    opened by mihaimorariu 0
Releases (0.1)
  • 0.1 (Aug 18, 2021)

    Initial version of PyNIF3D.

    Changelog:

    • Added a decoupled structure for NIF-based inference and training
      • Sampling functionalities (ray/pixel/feature)
      • NIF model rendering with generic chunking
      • Aggregation functionalities to generate the final pixel/occupancy values
    • Added dataset loaders:
      • LLFF
      • NeRF Blender
      • Deep Voxels
      • Shapes3D
      • DTU MVS
    • Added algorithm pipelines:
      • Convolutional Occupancy Networks (CON)
      • Neural Radiance Fields (NeRF)
      • Implicit Differentiable Renderer (IDR)
    • Added encoders:
      • Positional encoding
      • Fourier encoding
    • Added pre-trained models
    • Added a function for generating rays from a camera matrix
    • Added generic layer generation with bias and weight initializers
    • Added detailed logging structure through decorators
      • If the logging flag is set to DEBUG, function inputs/outputs can be logged; this is expected to reduce debugging time
    • Added explanatory exceptions and exception messages
    • Added tutorials and sample scripts
    • Added unit tests
    • Added linter
    • Added Sphinx configuration support
    • Added Dockerfile and pip installation support
    • Added comprehensible documentation to each function
    • Added CI support
Owner

Preferred Networks, Inc.