Official implementation of "MetaSDF: Meta-learning Signed Distance Functions"

MetaSDF: Meta-learning Signed Distance Functions

Project Page | Paper | Data

Vincent Sitzmann*, Eric Ryan Chan*, Richard Tucker, Noah Snavely
Gordon Wetzstein
*denotes equal contribution

This is the official implementation of the paper "MetaSDF: Meta-Learning Signed Distance Functions".

In this paper, we show how we may effectively learn a prior over implicit neural representations using gradient-based meta-learning.

While the paper demonstrates this for the special case of SDFs with the ReLU nonlinearity, the approach works formidably well with other types of neural implicit representations as well - such as our work "SIREN"!
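
To make this concrete, below is a minimal, hypothetical sketch (not the repository's code) of the underlying idea: a MAML-style inner loop specializes a meta-learned SDF initialization to a single shape with a few gradient steps, and the outer loop updates that initialization so the specialization works well. The network architecture, loss, and data loader are placeholders.

    import torch
    import torch.nn as nn

    # Placeholder SDF network: maps a 3D coordinate to a signed distance.
    net = nn.Sequential(nn.Linear(3, 256), nn.ReLU(),
                        nn.Linear(256, 256), nn.ReLU(),
                        nn.Linear(256, 1))
    meta_opt = torch.optim.Adam(net.parameters(), lr=1e-4)

    def specialize(params, coords, gt_sdf, steps=5, inner_lr=1e-2):
        """Inner loop: adapt the initialization to one shape's SDF samples."""
        for _ in range(steps):
            pred = torch.func.functional_call(net, params, coords)
            loss = (pred - gt_sdf).abs().mean()             # simple L1 SDF loss
            grads = torch.autograd.grad(loss, list(params.values()),
                                        create_graph=True)
            params = {name: p - inner_lr * g
                      for (name, p), g in zip(params.items(), grads)}
        return params

    # Outer loop over shapes; 'shapes' is a hypothetical iterable yielding
    # (coords, gt_sdf) sample tensors for each training shape.
    for coords, gt_sdf in shapes:
        fast_params = specialize(dict(net.named_parameters()), coords, gt_sdf)
        pred = torch.func.functional_call(net, fast_params, coords)
        meta_loss = (pred - gt_sdf).abs().mean()
        meta_opt.zero_grad()
        meta_loss.backward()       # gradient flows through the inner-loop steps
        meta_opt.step()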

We show you how in our Colab notebook:

Explore MetaSDF in Colab

DeepSDF

A large part of this codebase (directory "3D") is based on the code from the terrific paper "DeepSDF" - check them out!

Get started

If you only want to experiment with MetaSDF, we have written a Colab notebook that doesn't require installing anything and walks through a few other interesting properties of MetaSDF as well - for instance, it turns out you can train SIREN to fit any image in only three gradient descent steps!

If you want to reproduce all the experiments from the paper, you can set up a conda environment with all dependencies like so:

conda env create -f environment.yml
conda activate metasdf

3D Experiments

Dataset Preprocessing

Before training a model, you'll first need to preprocess the training meshes. Please follow the preprocessing steps used by DeepSDF if using ShapeNet.
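
For orientation, DeepSDF's preprocessing writes one file of SDF samples per mesh, split into points with positive (outside) and negative (inside) signed distance. A hypothetical loader for such a file might look as follows; the key names follow DeepSDF's convention but may differ in your setup:

    import numpy as np

    def load_sdf_samples(npz_path, n_per_side=8192):
        """Load a DeepSDF-style .npz of (x, y, z, sdf) rows, balanced between
        points outside ('pos') and inside ('neg') the surface."""
        data = np.load(npz_path)
        pos, neg = data["pos"], data["neg"]                  # each of shape (N, 4)
        pos = pos[np.random.choice(len(pos), n_per_side)]
        neg = neg[np.random.choice(len(neg), n_per_side)]
        samples = np.concatenate([pos, neg], axis=0)
        coords, sdf = samples[:, :3], samples[:, 3:]
        return coords.astype(np.float32), sdf.astype(np.float32)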

Define an Experiment

Next, you'll need to define the model and hyperparameters for your experiment. Examples are given in 3D/curriculums.py, but feel free to make modifications. Although not present in the original paper, we've included some curriculums with positional encodings and smaller models. These generally perform on par with the original models but require much less memory.
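
To give a rough sense of what a curriculum specifies, here is a purely illustrative sketch; every field name below is hypothetical, so consult 3D/curriculums.py for the actual keys and values:

    # Hypothetical curriculum entry; the real definitions live in 3D/curriculums.py.
    my_experiment = {
        'output_dir': 'logs/my_experiment',   # where checkpoints and summaries go
        'train_split': 'splits/train.json',   # DeepSDF-style split files
        'test_split': 'splits/test.json',
        'hidden_features': 256,               # smaller values -> less memory
        'num_hidden_layers': 8,
        'positional_encoding': True,          # the pos.-enc. variant mentioned above
        'inner_steps': 5,                     # gradient steps when specializing
        'inner_lr': 1e-2,
        'outer_lr': 1e-4,
        'batch_size': 16,
        'epochs': 100,
    }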

Train a Model

After you've preprocessed your data and have defined your curriculum, you're ready to start training! Navigate to the 3D/scripts directory and run

python run_train.py <curriculum name>

If training is interrupted, pass the --load flag to continue training from where you left off.

You should begin seeing printouts of the loss, with a summary at every epoch. Checkpoints and TensorBoard summaries are saved to the 'output_dir' directory defined in your curriculum. We log the raw loss, which is either the composite loss or the L1 loss depending on your experiment definition, as well as a 'Misclassified Percentage': the percentage of samples that the model incorrectly classified as inside or outside the mesh.
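
For reference, the 'Misclassified Percentage' amounts to a sign-agreement check between predicted and ground-truth signed distances. A hedged sketch (not necessarily the exact implementation in this repository):

    import torch

    def misclassified_percentage(pred_sdf, gt_sdf):
        """Percentage of samples whose predicted SDF sign disagrees with the
        ground truth, i.e. points placed on the wrong side of the surface."""
        wrong = (torch.sign(pred_sdf) != torch.sign(gt_sdf)).float()
        return 100.0 * wrong.mean().item()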

Reconstructing Meshes

After training a model, reconstruct some meshes using

python run_reconstruct.py <curriculum name> --checkpoint <checkpoint file name>

The script will use the 'test_split' as defined in the curriculum.
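
Conceptually, reconstruction evaluates the specialized SDF network on a dense grid and runs marching cubes on the zero level set to recover a triangle mesh. A hedged sketch of that step (the actual script may batch queries and post-process differently):

    import numpy as np
    import torch
    from skimage import measure

    def extract_mesh(sdf_net, resolution=128, bound=1.0):
        """Sample the SDF on a grid over [-bound, bound]^3 and extract the
        zero-level surface with marching cubes."""
        xs = np.linspace(-bound, bound, resolution, dtype=np.float32)
        grid = np.stack(np.meshgrid(xs, xs, xs, indexing='ij'), axis=-1)
        coords = torch.from_numpy(grid.reshape(-1, 3))
        with torch.no_grad():
            sdf = sdf_net(coords).reshape(resolution, resolution, resolution).numpy()
        verts, faces, _, _ = measure.marching_cubes(sdf, level=0.0,
                                                    spacing=(xs[1] - xs[0],) * 3)
        return verts - bound, faces   # shift vertices back into [-bound, bound]^3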

Evaluating Reconstructions

After reconstructing meshes, calculate Chamfer Distances between reconstructions and ground-truth meshes by running

python run_eval.py <reconstruction dir>
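
For context, the Chamfer distance here is conventionally the symmetric mean squared nearest-neighbor distance between point sets sampled on the reconstructed and ground-truth surfaces. A hedged sketch of that metric (the evaluation script may differ in sampling and scaling):

    import numpy as np
    from scipy.spatial import cKDTree

    def chamfer_distance(points_a, points_b):
        """Symmetric Chamfer distance between two (N, 3) point sets."""
        a_to_b = cKDTree(points_b).query(points_a)[0]   # nearest-neighbor distances
        b_to_a = cKDTree(points_a).query(points_b)[0]
        return np.mean(a_to_b ** 2) + np.mean(b_to_a ** 2)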

Torchmeta

We're using the excellent torchmeta to implement hypernetworks.
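
The key convenience of torchmeta is that its MetaModule layers accept an explicit parameter dictionary at call time, so fast weights produced by an inner loop (or predicted by a hypernetwork) can be swapped in without touching the module's stored parameters. A small hedged example of that mechanism, independent of this repository's models:

    from collections import OrderedDict
    import torch
    from torchmeta.modules import MetaModule, MetaSequential, MetaLinear

    class TinySDF(MetaModule):
        def __init__(self):
            super().__init__()
            self.net = MetaSequential(MetaLinear(3, 64), torch.nn.ReLU(),
                                      MetaLinear(64, 1))

        def forward(self, coords, params=None):
            # Route the relevant slice of the parameter dict to the submodule.
            return self.net(coords, params=self.get_subdict(params, 'net'))

    model = TinySDF()
    coords = torch.rand(128, 3)
    out = model(coords)                                   # uses stored weights
    fast = OrderedDict(model.meta_named_parameters())     # e.g. adapted copies
    out = model(coords, params=fast)                      # uses supplied weights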

Citation

If you find our work useful in your research, please cite:

       @inproceedings{sitzmann2019metasdf,
            author = {Sitzmann, Vincent
                      and Chan, Eric R.
                      and Tucker, Richard
                      and Snavely, Noah
                      and Wetzstein, Gordon},
            title = {MetaSDF: Meta-Learning Signed
                     Distance Functions},
            booktitle = {Proc. NeurIPS},
            year={2020}
       }

Contact

If you have any questions, please feel free to email the authors.
