[CVPR'21] Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration

Overview

This repository contains the implementation of our paper Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration. The code is largely based on Occupancy Networks - Learning 3D Reconstruction in Function Space.

You can find detailed usage instructions for training your own models and using pretrained models below.

If you find our code useful, please consider citing:

@InProceedings{PTF:CVPR:2021,
    author = {Shaofei Wang and Andreas Geiger and Siyu Tang},
    title = {Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration},
    booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2021}
}

Installation

This repository has been tested on the following platforms:

  1. Python 3.7, PyTorch 1.6 with CUDA 10.2 and cuDNN 7.6.5, Ubuntu 20.04
  2. Python 3.7, PyTorch 1.6 with CUDA 10.1 and cuDNN 7.6.4, CentOS 7.9.2009

First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an Anaconda environment called PTF using

conda create -n PTF python=3.7
conda activate PTF

Second, install PyTorch 1.6 via the official PyTorch website.
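
For reference, a conda command matching the first tested platform above (PyTorch 1.6 with CUDA 10.2) should look roughly like the one below; double-check it against the official previous-versions page and adjust cudatoolkit to your CUDA version:

conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch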

Third, install dependencies via

pip install -r requirements.txt

Fourth, manually install pytorch-scatter.
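
One common route is a prebuilt wheel matching your PyTorch/CUDA combination; the package version and wheel index below are assumptions for the PyTorch 1.6 / CUDA 10.2 setup above, so verify them against the pytorch-scatter README:

pip install torch-scatter==2.0.5+cu102 -f https://pytorch-geometric.com/whl/torch-1.6.0.html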

Lastly, compile the extension modules. You can do this via

python setup.py build_ext --inplace

(Optional) If you want to use the registration code under smpl_registration/, you need to install kaolin. Clone the code from the kaolin repository, check out commit e7e513173bd4159ae45be6b3e156a3ad156a3eb9, and install it according to its instructions.
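
A minimal sketch of this step (the final install command is an assumption; follow whatever the kaolin README at that commit prescribes):

git clone https://github.com/NVIDIAGameWorks/kaolin.git
cd kaolin
git checkout e7e513173bd4159ae45be6b3e156a3ad156a3eb9
python setup.py install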

(Optional) If you want to train/evaluate single-view models (which correspond to the configurations in configs/cape_sv), you need to install OpenDR to render depth images. You need to first install OSMesa; here is the command for installing it on Ubuntu:

sudo apt-get install libglu1-mesa-dev freeglut3-dev mesa-common-dev libosmesa6-dev

For installing OSMesa on CentOS 7, please check this related issue. After installing OSMesa, install OpenDR via:

pip install opendr

Build the dataset

To prepare the dataset for training/evaluation, you have to first download the CAPE dataset from the CAPE website.

  1. Download SMPL v1.0, remove the Chumpy objects inside the models using this code, then rename the files and extract them to ./body_models/smpl/. Eventually, the ./body_models folder should have the following structure:
    body_models
     └-- smpl
     	├-- male
     	|   └-- model.pkl
     	└-- female
     	    └-- model.pkl
    
    

Besides the SMPL models, you will also need to download all the .pkl files from the IP-Net repository and put them under ./body_models/misc/. Finally, run the following script to extract the SMPL parameters needed by our code:

python extract_smpl_parameters.py

The extracted SMPL parameters will be saved to ./body_models/misc/.
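
Putting the SMPL-related steps together, a minimal sketch could look like the following; the source file names are assumptions based on the SMPL v1.0 release, so adjust them to whatever your chumpy-cleaned model files are called:

mkdir -p body_models/smpl/male body_models/smpl/female
cp /path/to/cleaned/basicmodel_m_lbs_10_207_0_v1.0.0.pkl body_models/smpl/male/model.pkl
cp /path/to/cleaned/basicModel_f_lbs_10_207_0_v1.0.0.pkl body_models/smpl/female/model.pkl
python extract_smpl_parameters.py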

  2. Extract the CAPE dataset to an arbitrary path, denoted as ${CAPE_ROOT}. The extracted dataset should have the following structure:
    ${CAPE_ROOT}
     ├-- 00032
     ├-- 00096
     |   ...
     ├-- 03394
     └-- cape_release
    
    
  3. Create a data directory under the project directory.
  4. Modify the parameters in preprocess/build_dataset.sh accordingly (i.e. set --dataset_path to ${CAPE_ROOT}) to extract training/evaluation data.
  5. Run preprocess/build_dataset.sh to preprocess the CAPE dataset (see the sketch below).
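
Concretely, steps 3-5 might look like the following sketch:

mkdir data
# edit preprocess/build_dataset.sh so that --dataset_path points to ${CAPE_ROOT}
bash preprocess/build_dataset.sh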

Pre-trained models

We provide pre-trained PTF and IP-Net models at two encoder resolutions, 64x3 and 128x3. After downloading them, please put them under the respective directory, ./out/cape or ./out/cape_sv.

Generating Meshes

To generate all evaluation meshes using a trained model, use

python generate.py configs/cape/${config}.yaml

Alternatively, if you want to parallelize the generation on an HPC cluster, use:

python generate.py --subject-idx ${SUBJECT_IDX} --sequence-idx ${SEQUENCE_IDX} configs/cape/${config}.yaml

to generate meshes for the specified subject/sequence combination. A list of all subject/sequence combinations can be found in ./misc/subject_sequence.txt.
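
As a rough sketch of such parallelization, assuming each line of ./misc/subject_sequence.txt holds a subject index followed by a sequence index (and that you wrap the inner command in your cluster's job-submission tool, e.g. sbatch):

while read -r SUBJECT_IDX SEQUENCE_IDX; do
    python generate.py --subject-idx ${SUBJECT_IDX} --sequence-idx ${SEQUENCE_IDX} configs/cape/${config}.yaml
done < misc/subject_sequence.txt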

SMPL/SMPL+D Registration

To register SMPL/SMPL+D models to the generated meshes, use either of the following:

python smpl_registration/fit_SMPLD_PTFs.py --num-joints 24 --use-parts --init-pose configs/cape/${config}.yaml # for PTF
python smpl_registration/fit_SMPLD_PTFs.py --num-joints 14 --use-parts configs/cape/${config}.yaml # for IP-Net

Note that registration is very slow, taking roughly 1-2 minutes per frame. If you have access to an HPC cluster, it is advisable to parallelize over subject/sequence combinations, using the same subject/sequence input arguments as for mesh generation.
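
For example, a single per-combination registration call (assuming fit_SMPLD_PTFs.py accepts the same --subject-idx/--sequence-idx arguments as generate.py, as stated above) could look like:

python smpl_registration/fit_SMPLD_PTFs.py --num-joints 24 --use-parts --init-pose --subject-idx ${SUBJECT_IDX} --sequence-idx ${SEQUENCE_IDX} configs/cape/${config}.yaml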

Training

Finally, to train a new network from scratch, run

python train.py --num_workers 8 configs/cape/${config}.yaml

You can monitor the training process at http://localhost:6006 using TensorBoard:

tensorboard --logdir ${OUTPUT_DIR}/logs --port 6006

where you replace ${OUTPUT_DIR} with the respective output directory.

License

We use the MIT License for the PTF code, which covers:

extract_smpl_parameters.py
generate.py
train.py
setup.py
im2mesh/
preprocess/

Modules not covered by our license are modified versions of code from IP-Net (./smpl_registration) and SMPL-X (./human_body_prior); for these parts, please consult their respective licenses and cite the corresponding papers.
