3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks

Introduction

This repository contains the code and models for the following paper.

Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks
Yu Cheng, Bo Wang, Bo Yang, Robby T. Tan
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Overview of the proposed method: see the framework diagram 3DMPP_framework.png in the repository root.

Updates

  • 06/18/2021: released and tested the evaluation code for PCK (person-centric) and PCK_abs (camera-centric), along with the pre-trained model for the MuPoTS dataset

Installation

Dependencies

PyTorch >= 1.5
Python >= 3.6

Create an environment:

conda create -n 3dmpp python=3.6
conda activate 3dmpp

Install PyTorch (tested on PyTorch 1.5 - 1.7) for your OS and installed GPU driver by following the official PyTorch installation instructions. For example, on Linux with CUDA 11.0:

conda install pytorch torchvision cudatoolkit=11.0 -c pytorch
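
To confirm that the installation picked up CUDA, a quick check (generic PyTorch usage, not specific to this repository) is:

import torch

# Quick sanity check after installation.
print(torch.__version__)          # expect something in the 1.5 - 1.7 range
print(torch.cuda.is_available())  # expect True on a CUDA-enabled machine
print(torch.version.cuda)         # expect the installed CUDA version, e.g. 11.0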

Install the dependencies:

pip install -r requirements.txt

Build the Fast Gaussian Map tool:

cd lib/fastgaus
python setup.py build_ext --inplace
cd ../..
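
To confirm that the extension built correctly, you can try importing it from the build directory. The module name fastgaus below is an assumption inferred from the folder name; adjust it if the built extension is named differently:

# Run from inside lib/fastgaus after building.
# `fastgaus` as the module name is an assumption based on the folder name.
import fastgaus  # noqa: F401
print("Fast Gaussian Map extension imported successfully")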

Models and Testing Data

Pre-trained Models

Download the pre-trained model and processed human keypoint files here, and unzip the downloaded zip file into this project's root directory. After unzipping, you should see two folders: ./ckpts and ./mupots.

MuPoTS Dataset

The MuPoTS eval set is needed to reproduce the evaluation reported in Table 3 of the main paper; it is available on the MuPoTS dataset website. Download the mupots-3d-eval.zip file, unzip it, and run get_mupots-3d.sh to download the dataset. After the download completes, a MultiPersonTestSet.zip (~5.6 GB) becomes available. Unzip it and move the folder MultiPersonTestSet to the root directory of the project to perform evaluation on the MuPoTS test set. You should then see the following directory structure.

${3D-Multi-Person-Pose_ROOT}
|-- ckpts              <-- the downloaded pre-trained Models
|-- lib
|-- MultiPersonTestSet <-- the newly added MuPoTS eval set
|-- mupots             <-- the downloaded processed human keypoint files
|-- util
|-- 3DMPP_framework.png
|-- calculate_mupots_btmup.py
|-- other python code, LICENSE, and README files
...
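
Before running the evaluation scripts, a small sanity check (using only the directory names from the structure above) can save a failed run:

import os

# Verify the expected top-level layout from the directory tree above.
for path in ("ckpts", "mupots", "MultiPersonTestSet", "lib", "util"):
    status = "ok" if os.path.isdir(path) else "MISSING"
    print(f"{path:22s} {status}")

Run it from the project root; every entry should print ok.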

Usage

MuPoTS dataset evaluation

3D Multi-Person Pose Estimation Evaluation on MuPoTS Dataset

The following table corresponds to Table 3 in the main paper, which reports quantitative evaluations on the MuPoTS-3D dataset (best performance in bold). Instructions to reproduce the results (PCK and PCK_abs) are provided in the next section.

Group                             | Methods                     | PCK      | PCK_abs
----------------------------------|-----------------------------|----------|---------
Person-centric (relative 3D pose) | Mehta et al., 3DV'18        | 65.0     | N/A
Person-centric (relative 3D pose) | Rogez et al., IEEE TPAMI'19 | 70.6     | N/A
Person-centric (relative 3D pose) | Mehta et al., ACM TOG'20    | 70.4     | N/A
Person-centric (relative 3D pose) | Cheng et al., ICCV'19       | 74.6     | N/A
Person-centric (relative 3D pose) | Cheng et al., AAAI'20       | 80.5     | N/A
Camera-centric (absolute 3D pose) | Moon et al., ICCV'19        | 82.5     | 31.8
Camera-centric (absolute 3D pose) | Lin et al., ECCV'20         | 83.7     | 35.2
Camera-centric (absolute 3D pose) | Zhen et al., ECCV'20        | 80.5     | 38.7
Camera-centric (absolute 3D pose) | Li et al., ECCV'20          | 82.0     | 43.8
Camera-centric (absolute 3D pose) | Cheng et al., AAAI'21       | 87.5     | 45.7
Camera-centric (absolute 3D pose) | Our method                  | **89.6** | **48.0**

Run evaluation on the MuPoTS dataset with estimated 2D joints as input

We split the whole pipeline into several separate steps to make it clearer for users.

python calculate_mupots_topdown_pts.py
python calculate_mupots_topdown_depth.py
python calculate_mupots_btmup.py
python calculate_mupots_integrate.py

Please note that python calculate_mupots_btmup.py will take a while to run (30-40 minutes depending on your machine).
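
If you prefer a single command, a minimal wrapper (not part of the repository) can run the four steps in order and stop on the first failure:

import subprocess
import sys

# Convenience wrapper: run the four pipeline steps in order.
steps = [
    "calculate_mupots_topdown_pts.py",
    "calculate_mupots_topdown_depth.py",
    "calculate_mupots_btmup.py",       # the slow step (30-40 minutes)
    "calculate_mupots_integrate.py",
]
for script in steps:
    print(f"Running {script} ...")
    subprocess.run([sys.executable, script], check=True)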

To evaluate the person-centric 3D multi-person pose estimation:

python eval_mupots_pck.py

After running the above code, the following PCK (person-centric, pelvis-based origin) value is expected, which matches the PCK = 89 (percentage) reported in Table 3 of the paper.

...
Seq: 18
Seq: 19
Seq: 20
PCK_MEAN: 0.8994453169938017

To evaluate camera-centric (i.e., camera coordinates) 3D multi-person pose estimation:

python eval_mupots_pck_abs.py

After running the above code, the following PCK_abs (camera-centric) value is expected, which matches the PCK_abs = 48 (percentage) reported in Table 3 of the paper.

...
Seq: 18
Seq: 19
Seq: 20
PCK_MEAN: 0.48514110933606175
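
For intuition about what these two numbers measure, here is a minimal sketch of a 3DPCK-style metric, assuming predicted and ground-truth joints in millimeters and the standard 150 mm threshold. The person-centric variant root-aligns both poses before thresholding, while the camera-centric variant (PCK_abs) compares absolute camera coordinates. The root joint index below is a placeholder; this is an illustration, not the repository's evaluation code (eval_mupots_pck.py and eval_mupots_pck_abs.py implement the official protocol):

import numpy as np

def pck_3d(pred, gt, threshold=150.0, root_idx=14, align_root=True):
    """Fraction of joints within `threshold` mm of the ground truth.

    pred, gt: (num_joints, 3) arrays in millimeters.
    align_root=True  -> person-centric PCK (poses translated so roots coincide)
    align_root=False -> camera-centric PCK_abs (absolute coordinates compared)
    root_idx is a placeholder; use the skeleton's actual pelvis/root index.
    """
    if align_root:
        pred = pred - pred[root_idx]
        gt = gt - gt[root_idx]
    dist = np.linalg.norm(pred - gt, axis=-1)  # per-joint Euclidean error
    return float((dist <= threshold).mean())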

License

The code is released under the MIT license. See LICENSE for details.

Citation

If this work is useful for your research, please cite our paper:

@InProceedings{Cheng_2021_CVPR,
    author    = {Cheng, Yu and Wang, Bo and Yang, Bo and Tan, Robby T.},
    title     = {Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {7649-7659}
}