
HuMoR: 3D Human Motion Model for Robust Pose Estimation (ICCV 2021)

This is the official implementation for the ICCV 2021 paper. For more information, see the project webpage.

HuMoR Teaser

Environment Setup

Note: This code was developed on Ubuntu 16.04/18.04 with Python 3.7, CUDA 10.1 and PyTorch 1.6.0. Later versions should work, but have not been tested.

Create and activate a virtual environment to work in, e.g. using Conda:

conda create -n humor_env python=3.7
conda activate humor_env

Install CUDA and PyTorch 1.6. For CUDA 10.1, this would look like:

conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch

Install the remaining requirements with pip:

pip install -r requirements.txt

You must also have ffmpeg installed on your system to save visualizations.

Downloads & External Dependencies

This codebase relies on several external downloads, depending on the mode of operation. Here we briefly describe each and what it is used for. Detailed setup instructions are linked in the other READMEs.

Body Model and Pose Prior

Detailed instructions to install SMPL+H and VPoser are in this documentation.

  • SMPL+H is used for the pose/shape body model. Downloading this model is necessary for all uses of this codebase.
  • VPoser is used as a pose prior only during the initialization phase of fitting, so it's only needed if you are using the test-time optimization functionality of this codebase.

Datasets

Detailed instructions to install, configure, and process each dataset are in this documentation.

  • AMASS motion capture data is used to train and evaluate (e.g. randomly sample) the HuMoR motion model and for fitting to 3D data like noisy joints and partial keypoints.
  • i3DB contains RGB videos with heavy occlusions and is only used in the paper to evaluate test-time fitting to 2D joints.
  • PROX contains RGB-D videos and is only used in the paper to evaluate test-time fitting to 2D joints and 3D point clouds.

Pretrained Models

Pretrained model checkpoints are available for HuMoR, HuMoR-Qual, and the initial state Gaussian mixture. To download them (~215 MB), run the following from the repo root:

bash get_ckpt.sh

OpenPose

OpenPose is used to detect 2D joints for fitting to arbitrary RGB videos. If you will be running test-time optimization on the demo video or your own videos, you must install OpenPose. To clone and build, please follow the OpenPose README in their repo.

Optimization in run_fitting.py assumes OpenPose is installed at ./external/openpose by default; if you installed it elsewhere, please pass in the location using the --openpose flag.
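
For example, assuming additional flags can be appended after a config file, the demo fitting command could be pointed at a custom install location like this (the path is just a placeholder):

python humor/fitting/run_fitting.py @./configs/fit_rgb_demo_no_split.cfg --openpose /path/to/openpose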

Fitting to RGB Videos (Test-Time Optimization)

To run motion/shape estimation on an arbitrary RGB video, you must have SMPL+H, VPoser, OpenPose, and a pretrained HuMoR model as detailed above. We have included a demo video in this repo along with a few example configurations to get started.

Note: if running on your own video, make sure the camera is not moving and the person is not interacting with uneven terrain in the scene (we assume a single ground plane). Also, only one person will be reconstructed.

To run the optimization on the demo video use:

python humor/fitting/run_fitting.py @./configs/fit_rgb_demo_no_split.cfg

This configuration optimizes over the entire video (~3 sec) at once (i.e. over all frames). If your video is longer than 2-3 sec, it is recommended to instead use the settings in ./configs/fit_rgb_demo_use_split.cfg, which add the --rgb-seq-len, --rgb-overlap-len, and --rgb-overlap-consist-weight arguments. With this configuration, the input video is split into multiple overlapping sub-sequences that are optimized in a batched fashion (with consistency losses between sub-sequences). This increases efficiency and lessens the need to tune parameters based on video length. Note that the larger the batch size, the better the results will be.
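
For example, to run the split-based optimization on the demo video, the invocation is the same as above but with the alternate configuration file:

python humor/fitting/run_fitting.py @./configs/fit_rgb_demo_use_split.cfg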

If known, it's highly recommended to pass in camera intrinsics using the --rgb-intrinsics flag. See ./configs/intrinsics_default.json for an example of what this looks like. If intrinsics are not given, default focal lengths are used.

Finally, this demo does not use PlaneRCNN to initialize the ground as described in the paper. Instead, it roughly initializes the ground at y = 0.5 (with camera up-axis -y). We found this to be sufficient, and often better than using PlaneRCNN. If you want to use PlaneRCNN instead, set up a separate environment, follow their install instructions, and then run their method with the command below, where example_image_dir contains a single frame from your video along with the camera parameters:

python evaluate.py --methods=f --suffix=warping_refine --dataset=inference --customDataFolder=example_image_dir

The resulting output directory can then be passed into our optimization using the --rgb-planercnn-res flag.

Visualizing RGB Results

The optimization is performed in 3 stages: stages 1 & 2 are an initialization using a pose prior and smoothing (i.e. the VPoser-t baseline), and stage 3 is the full optimization with the HuMoR motion prior. For the demo, the final output for the full sequence will be saved in ./out/rgb_demo_no_split/results_out/final_results/stage3_results.npz. To visualize results from the fitting, use something like:

python humor/fitting/viz_fitting_rgb.py  --results ./out/rgb_demo_no_split/results_out --out ./out/rgb_demo_no_split/viz_out --viz-prior-frame

By default, this will visualize the final full video result along with each sub-sequence separately (if applicable). Please use --help to see the many additional visualization options. This code is also useful to see how to load in and use the results for other tasks, if desired.
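
If you want to load the saved results into your own code, a minimal sketch (assuming only that the .npz file holds named numpy arrays; see viz_fitting_rgb.py for how the fields are actually interpreted) is:

import numpy as np

# Hypothetical example path; point this at the results file from your own run.
res = np.load('./out/rgb_demo_no_split/results_out/final_results/stage3_results.npz')

# List the arrays stored in the results file along with their shapes.
for name in res.files:
    print(name, res[name].shape)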

Fitting on Specific Datasets

Next, we detail how to run and evaluate the test-time optimization on the various datasets presented in the paper. In all of these examples, the default batch size is quite small to accommodate smaller GPUs, but it should be increased depending on your system.

AMASS 3D Data

There are multiple possible settings for fitting to 3D data (e.g. noisy joints, partial keypoints, etc.), which can be specified using configuration flags. For example, to fit to partial upper-body 3D keypoints sampled from AMASS data, run:

python humor/fitting/run_fitting.py @./configs/fit_amass_keypts.cfg

Optimization results can be visualized using

python humor/fitting/eval_fitting_3d.py --results ./out/amass_verts_upper_fitting/results_out --out ./out/amass_verts_upper_fitting/eval_out  --qual --viz-stages --viz-observation

and evaluation metrics computed with

python humor/fitting/eval_fitting_3d.py --results ./out/amass_verts_upper_fitting/results_out --out ./out/amass_verts_upper_fitting/eval_out  --quant --quant-stages

The most relevant quantitative results will be written to eval_out/eval_quant/compare_mean.csv.
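
To inspect these metrics programmatically, a minimal sketch (assuming pandas is installed and using the output directory from the command above) is:

import pandas as pd

# Load the per-stage mean metrics written by the evaluation script.
df = pd.read_csv('./out/amass_verts_upper_fitting/eval_out/eval_quant/compare_mean.csv')
print(df)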

i3DB RGB Data

The i3DB dataset contains RGB videos with many occlusions along with annotated 3D joints for evaluation. To run test-time optimization on the full dataset, use:

python humor/fitting/run_fitting.py @./configs/fit_imapper.cfg

Results can be visualized using the same script as in the demo:

python humor/fitting/viz_fitting_rgb.py  --results ./out/imapper_fitting/results_out --out ./out/imapper_fitting/viz_out --viz-prior-frame

Quantitative evaluation (comparing to results after each optimization stage) can be run with:

python humor/fitting/eval_fitting_2d.py --results ./out/imapper_fitting/results_out --dataset iMapper --imapper-floors ./data/iMapper/i3DB/floors --out ./out/imapper_fitting/eval_out --quant --quant-stages

The final quantitative results will be written to eval_out/eval_quant/compare_mean.csv.

PROX RGB/RGB-D Data

PROX contains RGB-D data, so it affords fitting either to 2D joints alone or to 2D joints + 3D point cloud. The commands for running each of these are quite similar, just with different configuration files. To run on the full RGB-D data, use:

python humor/fitting/run_fitting.py @./configs/fit_proxd.cfg

Visualization requires adding the --flip-img flag to align with the original PROX videos:

python humor/fitting/viz_fitting_rgb.py  --results ./out/proxd_fitting/results_out --out ./out/proxd_fitting/viz_out --viz-prior-frame --flip-img

Quantitative evaluation (of plausibility metrics) for the full RGB-D data uses

python humor/fitting/eval_fitting_2d.py --results ./out/proxd_fitting/results_out --dataset PROXD --prox-floors ./data/prox/qualitative/floors --out ./out/proxd_fitting/eval_out --quant --quant-stages

and for RGB-only data it is slightly different:

python humor/fitting/eval_fitting_2d.py --results ./out/prox_fitting/results_out --dataset PROX --prox-floors ./data/prox/qualitative/floors --out ./out/prox_fitting/eval_out --quant --quant-stages

Training & Testing Motion Model

There are two versions of our model: HuMoR and HuMoR-Qual. HuMoR is the main model presented in the paper and is best suited for test-time optimization. HuMoR-Qual is a slight variation on HuMoR that gives more stable and qualitatively superior results for random motion generation (see the paper for details).

Below we describe how to train and test HuMoR, but the exact same commands are used for HuMoR-Qual with a different configuration file at each step (see all provided configs).

Training HuMoR

To train HuMoR from scratch, make sure you have the processed version of the AMASS dataset at ./data/amass_processed and run:

python humor/train/train_humor.py @./configs/train_humor.cfg

The default batch size is meant for a 16 GB GPU.

Testing HuMoR

After training HuMoR or downloading the pretrained checkpoints, we can evaluate the model in multiple ways.

To compute single-step losses (exactly the same as during training) over the entire test set, run:

python humor/test/test_humor.py @./configs/test_humor.cfg

To randomly sample a motion sequence and save a video visualization, run:

python humor/test/test_humor.py @./configs/test_humor_sampling.cfg

If you'd rather visualize the sampling results in an interactive viewer, use:

python humor/test/test_humor.py @./configs/test_humor_sampling_debug.cfg

Try adding --viz-pred-joints, --viz-smpl-joints, or --viz-contacts to the end of the command to visualize more outputs, or increasing the value of --eval-num-samples to sample the model multiple times from the same initial state. --help can always be used to see all flags and their descriptions.
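
For example, a sampling run that also visualizes the predicted joints and draws multiple samples per initial state might look like this (the sample count here is arbitrary):

python humor/test/test_humor.py @./configs/test_humor_sampling.cfg --viz-pred-joints --eval-num-samples 5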

Training Initial State GMM

Test-time optimization also uses a Gaussian mixture model (GMM) prior over the initial state of the sequence. The pretrained model can be downloaded above, but if you wish to train from scratch, run:

python humor/train/train_state_prior.py --data ./data/amass_processed --out ./out/init_state_prior_gmm --gmm-comps 12

Citation

If you found this code or paper useful, please consider citing:

@inproceedings{rempe2021humor,
    author={Rempe, Davis and Birdal, Tolga and Hertzmann, Aaron and Yang, Jimei and Sridhar, Srinath and Guibas, Leonidas J.},
    title={HuMoR: 3D Human Motion Model for Robust Pose Estimation},
    booktitle={International Conference on Computer Vision (ICCV)},
    year={2021}
}

Questions?

If you run into any problems or have questions, please create an issue or contact Davis (first author) via email.
