ProHMR - Probabilistic Modeling for Human Mesh Recovery

Code repository for the paper:
Probabilistic Modeling for Human Mesh Recovery
Nikos Kolotouros, Georgios Pavlakos, Dinesh Jayaraman, Kostas Daniilidis
ICCV 2021
[paper] [project page] [colab notebook]

[teaser figure]

Installation instructions

We recommend creating a clean conda environment and installing all dependencies in it. You can do this as follows:

conda env create -f environment.yml

After the installation is complete you can activate the conda environment by running:

conda activate prohmr

Alternatively, you can also create a virtual environment:

python -m venv .prohmr_venv
source .prohmr_venv/bin/activate
pip install -r requirements.txt

The last step is to install prohmr as a Python package. This will allow you to import it from anywhere on your system. Since you might want to modify the code, we recommend installing it as follows:

python setup.py develop
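
If you prefer pip-style editable installs, the equivalent command should be:

pip install -e .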

In case you want to evaluate our approach on Human3.6M, you also need to manually install the pycdf package of the spacepy library to process some of the original files. If you face difficulties with the installation, you can find more detailed instructions here.
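
As a rough sketch (exact paths vary by system), pycdf needs the NASA CDF C library to be built first and made discoverable through the CDF_LIB environment variable before installing spacepy:

export CDF_LIB=/path/to/cdf/lib   # hypothetical location of the compiled CDF library
pip install spacepy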

Fetch data

Download the pretrained model checkpoint together with some additional data (joint regressors, etc.) and place them under data/. We provide a script to fetch the necessary data for training and evaluation. You need to run:

./fetch_data.sh

Besides these files, you also need to download the SMPL model. You will need the neutral model for training and running the demo code, while the male and female models will be necessary for preprocessing the 3DPW dataset. Please go to the websites for the corresponding projects and register to get access to the downloads section. Create a folder data/smpl/ and place the models there.
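
After these steps, the contents of data/ should look roughly like the sketch below; the exact file names depend on the checkpoint and on the SMPL release you download, and are shown here only for illustration:

data/
├── checkpoint.pt        (pretrained ProHMR checkpoint; name may differ)
├── ...                  (joint regressors and other files fetched by fetch_data.sh)
└── smpl/
    ├── SMPL_NEUTRAL.pkl
    ├── SMPL_MALE.pkl
    └── SMPL_FEMALE.pkl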

Run demo code

The easiest way to try our demo is by providing images with their corresponding OpenPose detections. These are used to compute the bounding boxes around the humans and optionally fit the SMPL body model to the keypoint detections. We provide some example images in the example_data/ folder. You can test our network on these examples by running:

python demo.py --img_folder=example_data/images --keypoint_folder=example_data/keypoints --out_folder=out --run_fitting

You might see some warnings about missing keys for SMPL components, which you can ignore. The code will save the rendered results for the regression and fitting in the newly created out/ directory. By default the demo code performs the fitting in the image crop and not in the original image space. If you want to instead fit in the original image space you can pass the --full_frame flag.
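
For reference, OpenPose writes one JSON file per image, storing each detected person as a flat [x, y, confidence] keypoint list. A minimal sketch of how a bounding box can be derived from such a file (written independently of the demo script, which handles this internally) could look like this:

import json
import numpy as np

def bbox_from_openpose(json_file, conf_thresh=0.3):
    """Estimate a person bounding box (center and scale) from an OpenPose keypoint file."""
    with open(json_file) as f:
        data = json.load(f)
    # Keypoints are stored as a flat [x1, y1, c1, x2, y2, c2, ...] list per person.
    keypoints = np.array(data['people'][0]['pose_keypoints_2d']).reshape(-1, 3)
    valid = keypoints[keypoints[:, 2] > conf_thresh, :2]  # keep confident joints only
    x_min, y_min = valid.min(axis=0)
    x_max, y_max = valid.max(axis=0)
    center = np.array([(x_min + x_max) / 2.0, (y_min + y_max) / 2.0])
    scale = max(x_max - x_min, y_max - y_min)  # side length of a square crop around the person
    return center, scale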

Colab Notebook

We also provide a Colab Notebook here where you can test our method on videos from YouTube. Check it out!

Dataset preprocessing

Besides the demo code, we also provide code to train and evaluate our models on the datasets we employ for our empirical evaluation. Before continuing, please make sure that you follow the details for data preprocessing.

Run evaluation code

The evaluation code is contained in eval/. We provide 4 different evaluation scripts.

  • eval_regression.py is used to evaluate ProHMR as a regression model as in Table 1 of the paper.
  • eval_keypoint_fitting.py is used to evaluate the fitting on 2D keypoints as in Table 3 of the paper.
  • eval_multiview.py is used to evaluate the multi-view refinement as in Table 5 of the paper.
  • eval_skeleton.py is used to evaluate the probabilistic 2D pose lifting network, similarly to Table 6 of the main paper.

Example usage:

python eval/eval_keypoint_fitting.py --dataset=3DPW-TEST

Running the above command will compute the Reconstruction Error before and after the fitting on the test set of 3DPW. For more information on the available command line options you can run the command with the --help argument.
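
The reconstruction error reported here is the mean per-joint position error after Procrustes alignment (PA-MPJPE), as is standard in this line of work. A minimal sketch of the metric, written independently of the evaluation code in eval/, is:

import numpy as np

def reconstruction_error(pred, gt):
    """PA-MPJPE: mean per-joint error (same units as input) after rigid alignment of pred to gt.

    pred, gt: (N, 3) arrays of corresponding 3D joint locations.
    """
    # Center both joint sets.
    mu_pred, mu_gt = pred.mean(axis=0), gt.mean(axis=0)
    X, Y = pred - mu_pred, gt - mu_gt
    # Optimal rotation and scale via orthogonal Procrustes (SVD of the covariance).
    U, S, Vt = np.linalg.svd(X.T @ Y)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (X ** 2).sum()
    pred_aligned = scale * X @ R.T + mu_gt
    return np.linalg.norm(pred_aligned - gt, axis=1).mean()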

Run training code

Due to license limitations, we cannot provide the SMPL parameters for Human3.6M (recovered using MoSh). Even if you do not have access to these parameters, you can still use our training code with data from the other datasets. Again, make sure that you follow the details for data preprocessing. Alternatively, you can use the SMPLify 3D fitting code to generate SMPL parameter annotations by fitting the model to the 3D keypoints provided by the dataset. Example usage:

python train/train_prohmr.py --root_dir=prohmr_reproduce/

This will train the model using the default config file prohmr/configs/prohmr.yaml as described in the paper. It will also create the folders prohmr_reproduce/checkpoints and prohmr_reproduce/tensorboard where the model checkpoints and Tensorboard logs will be saved.
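
Training progress can then be monitored with TensorBoard, for example:

tensorboard --logdir=prohmr_reproduce/tensorboard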

We also provide the training code for the probabilistic version of Martinez et al. We are not allowed to redistribute the Stacked Hourglass keypoint detections that were used to train the model in the paper, so in this version of the code we replace them with the ground-truth 2D keypoints of the dataset. You can train the skeleton model by running:

python train/train_skeleton.py --root_dir=skeleton_lifting/

Running this script will produce output similar to that of the ProHMR training script.

Acknowledgements

Parts of the code are taken or adapted from the following repos:

Citing

If you find this code useful for your research or use data generated by our method, please consider citing the following paper:

@Inproceedings{kolotouros2021prohmr,
  Title          = {Probabilistic Modeling for Human Mesh Recovery},
  Author         = {Kolotouros, Nikos and Pavlakos, Georgios and Jayaraman, Dinesh and Daniilidis, Kostas},
  Booktitle      = {ICCV},
  Year           = {2021}
}
Owner

Nikos Kolotouros
I am a CS PhD student at the University of Pennsylvania working on Computer Vision and Machine Learning.