SNARF: Differentiable Forward Skinning for Animating Non-rigid Neural Implicit Shapes

Paper | Supp | Video | Project Page | Blog (AITAVG)

Official code release for ICCV 2021 paper SNARF: Differentiable Forward Skinning for Animating Non-rigid Neural Implicit Shapes. We propose a novel forward skinning module to animate neural implicit shapes with good generalization to unseen poses.
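
The core idea, sketched below under simplifying assumptions (a single bone and an analytic occupancy stand in for the learned networks; the function names are illustrative, not this repo's API): for a query point in posed space, we search for its canonical correspondence by solving the forward skinning equation with iterative root finding, and then evaluate the canonical occupancy there.

import numpy as np
from scipy.optimize import root

def skinning_weights(x_c):
    # Stand-in for the learned skinning weight field w(x_c); one bone here.
    return np.array([1.0])

def forward_lbs(x_c, bone_transforms):
    # Forward linear blend skinning of a single canonical point x_c.
    T = np.einsum('b,bij->ij', skinning_weights(x_c), bone_transforms)
    return (T @ np.append(x_c, 1.0))[:3]

def canonical_occupancy(x_c):
    # Stand-in for the canonical occupancy network (sphere of radius 0.5).
    return float(np.linalg.norm(x_c) < 0.5)

bone_transforms = np.tile(np.eye(4), (1, 1, 1))   # (num_bones, 4, 4)
bone_transforms[0, :3, 3] = [0.0, 0.3, 0.0]       # pose: translate the bone by 0.3 in y

x_d = np.array([0.1, 0.5, 0.0])                   # query point in posed (deformed) space

# Find x_c such that forward_lbs(x_c) = x_d via Broyden's method.
sol = root(lambda x_c: forward_lbs(x_c, bone_transforms) - x_d,
           x0=x_d, method='broyden1')
print(sol.x, canonical_occupancy(sol.x))          # canonical correspondence and its occupancy

The full method additionally searches from multiple initializations to handle multiple correspondences and differentiates through the root implicitly; see the paper for details.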

If you find our code or paper useful, please cite as

@inproceedings{chen2021snarf,
  title={SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes},
  author={Chen, Xu and Zheng, Yufeng and Black, Michael J and Hilliges, Otmar and Geiger, Andreas},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021}
}

Quick Start

Clone this repo:

git clone https://github.com/xuchen-ethz/snarf.git
cd snarf

Install environment:

conda env create -f environment.yml
conda activate snarf
python setup.py install

Download the SMPL models (version 1.0.0 for Python 2.7, with 10 shape PCs) and move them to the corresponding locations:

mkdir lib/smpl/smpl_model/
mv /path/to/smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl lib/smpl/smpl_model/SMPL_FEMALE.pkl
mv /path/to/smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl lib/smpl/smpl_model/SMPL_MALE.pkl

Download our pretrained models and test motion sequences:

sh ./download_data.sh

Run a quick demo for a clothed human:

python demo.py expname=cape subject=3375 demo.motion_path=data/aist_demo/seqs +experiments=cape

You can find the video in outputs/cape/3375/demo.mp4 and the images in outputs/cape/3375/images/. To save the meshes, add demo.save_mesh=true to the command.

You can also try other subjects (see outputs/data/cape for available options) by setting subject=xx, and other motion sequences from AMASS by setting demo.motion_path=/path/to/amass_motion.npz.

Some motion sequences have a high frame rate, so you may want to skip frames. To do this, add demo.every_n_frames=x to use every x-th frame of the motion sequence (e.g., demo.every_n_frames=10 for PosePrior sequences).
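
To pick a sensible value for x, you can check the sequence's frame rate first. Below is a minimal sketch, assuming the usual AMASS .npz fields (the path is a placeholder):

import numpy as np

seq = np.load('/path/to/amass_motion.npz')      # placeholder path to an AMASS sequence
fps = float(seq['mocap_framerate'])             # many AMASS sequences are 60-120 fps
n_frames = seq['poses'].shape[0]
print(f'{n_frames} frames at {fps:.0f} fps')
print('every_n_frames for ~30 fps playback:', max(1, round(fps / 30)))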

By default, we use demo.fast_mode=true for fast mesh extraction. In this mode, we first extract the mesh in canonical space and then forward-skin it to posed space. This bypasses root finding during inference and is therefore faster, but it does not actually deform a continuous field. To deform the continuous field first and then extract the mesh in deformed space, use demo.fast_mode=false instead.
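
The sketch below illustrates the fast-mode idea under simplifying assumptions (a toy analytic occupancy and a single identity bone stand in for the learned networks; none of these names are the repo's API): marching cubes runs once in canonical space, and only the resulting mesh vertices are forward-skinned.

import numpy as np
from skimage import measure

def occupancy_canonical(pts):
    # Toy canonical occupancy standing in for the network (sphere of radius 0.5).
    return (np.linalg.norm(pts, axis=-1) < 0.5).astype(np.float32)

def forward_lbs(verts, weights, bone_transforms):
    # Linear blend skinning of mesh vertices: blend bone transforms per vertex.
    homo = np.concatenate([verts, np.ones_like(verts[:, :1])], axis=-1)  # (N, 4)
    T = np.einsum('nb,bij->nij', weights, bone_transforms)               # (N, 4, 4)
    return np.einsum('nij,nj->ni', T, homo)[:, :3]

# 1) Marching cubes once, on a grid in canonical space.
res = 64
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, res)] * 3, indexing='ij'), -1)
occ = occupancy_canonical(grid.reshape(-1, 3)).reshape(res, res, res)
verts, faces, _, _ = measure.marching_cubes(occ, level=0.5)
verts = verts / (res - 1) * 2 - 1            # voxel indices back to [-1, 1]

# 2) Forward-skin only the mesh vertices to posed space (identity pose here).
weights = np.ones((len(verts), 1))           # single bone, weight 1 everywhere
bone_transforms = np.eye(4)[None]            # (num_bones, 4, 4)
posed_verts = forward_lbs(verts, weights, bone_transforms)

With demo.fast_mode=false, the correspondence search instead runs for the query points of a grid in posed space before marching cubes, which is slower but deforms the field itself.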

Training and Evaluation

Install Additional Dependencies

Install kaolin for fast occupancy queries from meshes.

git clone https://github.com/NVIDIAGameWorks/kaolin
cd kaolin
git checkout v0.9.0
python setup.py develop

Minimally Clothed Human

Prepare Datasets

Download the AMASS dataset. We use the ''DFaust Synthetic'' and ''PosePrior'' subsets in the SMPL-H format. Unzip the dataset into the data folder.

tar -xf DFaust67.tar.bz2 -C data
tar -xf MPILimits.tar.bz2 -C data

Preprocess dataset:

python preprocess/sample_points.py --output_folder data/DFaust_processed
python preprocess/sample_points.py --output_folder data/MPI_processed --skip 10 --poseprior

Training

Run the following command to train for a specified subject:

python train.py subject=50002

Training logs are available on wandb (free registration required). Training should take ~12h on a single 2080Ti.

Evaluation

Run the following command to evaluate the method for a specified subject on within-distribution data (the DFaust test split):

python test.py subject=50002

and on out-of-distribution data (PosePrior):

python test.py subject=50002 datamodule=jointlim

Generate Animation

You can use the trained model to generate animation (same as in Quick Start):

python demo.py expname='dfaust' subject=50002 demo.motion_path='data/aist_demo/seqs'

Clothed Human

Training

Download the CAPE dataset and unzip it into the data folder.

Run the following command to train for a specified subject and clothing type:

python train.py datamodule=cape subject=3375 datamodule.clothing='blazerlong' +experiments=cape  

Training logs are available on wandb. It should take ~24h on a single 2080Ti.

Generate Animation

You can use the trained model to generate animation (same as in Quick Start):

python demo.py expname=cape subject=3375 demo.motion_path=data/aist_demo/seqs +experiments=cape

Acknowledgement

We use the pre-processing code from PTF and LEAP with some adaptations (./preprocess). The network and sampling parts of the code (lib/model/network.py and lib/model/sample.py) are based on IGR and IDR. The mesh extraction code (lib/utils/meshing.py) is adapted from NASA. Our implementation of Broyden's method (lib/model/broyden.py) is based on DEQ. We sincerely thank these authors for their awesome work.
