Bottom-up Human Pose Estimation

Introduction

This is the official code of "Rethinking the Heatmap Regression for Bottom-up Human Pose Estimation", accepted to CVPR 2021.

This repo is built on Bottom-up-Higher-HRNet.

Main Results

Results on COCO val2017 without multi-scale test

| Method | Backbone | Input size | #Params | GFLOPs | AP | AP .5 | AP .75 | AP (M) | AP (L) |
|--------|----------|------------|---------|--------|----|-------|--------|--------|--------|
| HigherHRNet | HRNet-w32 | 512 | 28.6M | 47.9 | 67.1 | 86.2 | 73.0 | 61.5 | 76.1 |
| HigherHRNet + SWAHR | HRNet-w32 | 512 | 28.6M | 48.0 | 68.9 | 87.8 | 74.9 | 63.0 | 77.4 |
| HigherHRNet | HRNet-w48 | 640 | 63.8M | 154.3 | 69.9 | 87.2 | 76.1 | 65.4 | 76.4 |
| HigherHRNet + SWAHR | HRNet-w48 | 640 | 63.8M | 154.6 | 70.8 | 88.5 | 76.8 | 66.3 | 77.4 |

Results on COCO val2017 with multi-scale test

| Method | Backbone | Input size | #Params | GFLOPs | AP | AP .5 | AP .75 | AP (M) | AP (L) |
|--------|----------|------------|---------|--------|----|-------|--------|--------|--------|
| HigherHRNet | HRNet-w32 | 512 | 28.6M | 47.9 | 69.9 | 87.1 | 76.0 | 65.3 | 77.0 |
| HigherHRNet + SWAHR | HRNet-w32 | 512 | 28.6M | 48.0 | 71.4 | 88.9 | 77.8 | 66.3 | 78.9 |
| HigherHRNet | HRNet-w48 | 640 | 63.8M | 154.3 | 72.1 | 88.4 | 78.2 | 67.8 | 78.3 |
| HigherHRNet + SWAHR | HRNet-w48 | 640 | 63.8M | 154.6 | 73.2 | 89.8 | 79.1 | 69.1 | 79.3 |

Results on COCO test-dev2017 without multi-scale test

| Method | Backbone | Input size | #Params | GFLOPs | AP | AP .5 | AP .75 | AP (M) | AP (L) |
|--------|----------|------------|---------|--------|----|-------|--------|--------|--------|
| OpenPose* | - | - | - | - | 61.8 | 84.9 | 67.5 | 57.1 | 68.2 |
| Hourglass | Hourglass | 512 | 277.8M | 206.9 | 56.6 | 81.8 | 61.8 | 49.8 | 67.0 |
| PersonLab | ResNet-152 | 1401 | 68.7M | 405.5 | 66.5 | 88.0 | 72.6 | 62.4 | 72.3 |
| PifPaf | - | - | - | - | 66.7 | - | - | 62.4 | 72.9 |
| Bottom-up HRNet | HRNet-w32 | 512 | 28.5M | 38.9 | 64.1 | 86.3 | 70.4 | 57.4 | 73.9 |
| HigherHRNet | HRNet-w32 | 512 | 28.6M | 47.9 | 66.4 | 87.5 | 72.8 | 61.2 | 74.2 |
| HigherHRNet + SWAHR | HRNet-w32 | 512 | 28.6M | 48.0 | 67.9 | 88.9 | 74.5 | 62.4 | 75.5 |
| HigherHRNet | HRNet-w48 | 640 | 63.8M | 154.3 | 68.4 | 88.2 | 75.1 | 64.4 | 74.2 |
| HigherHRNet + SWAHR | HRNet-w48 | 640 | 63.8M | 154.6 | 70.2 | 89.9 | 76.9 | 65.2 | 77.0 |

Results on COCO test-dev2017 with multi-scale test

| Method | Backbone | Input size | #Params | GFLOPs | AP | AP .5 | AP .75 | AP (M) | AP (L) |
|--------|----------|------------|---------|--------|----|-------|--------|--------|--------|
| Hourglass | Hourglass | 512 | 277.8M | 206.9 | 63.0 | 85.7 | 68.9 | 58.0 | 70.4 |
| Hourglass* | Hourglass | 512 | 277.8M | 206.9 | 65.5 | 86.8 | 72.3 | 60.6 | 72.6 |
| PersonLab | ResNet-152 | 1401 | 68.7M | 405.5 | 68.7 | 89.0 | 75.4 | 64.1 | 75.5 |
| HigherHRNet | HRNet-w48 | 640 | 63.8M | 154.3 | 70.5 | 89.3 | 77.2 | 66.6 | 75.8 |
| HigherHRNet + SWAHR | HRNet-w48 | 640 | 63.8M | 154.6 | 72.0 | 90.7 | 78.8 | 67.8 | 77.7 |

Results on CrowdPose test

| Method | AP | AP .5 | AP .75 | AP (E) | AP (M) | AP (H) |
|--------|----|-------|--------|--------|--------|--------|
| Mask-RCNN | 57.2 | 83.5 | 60.3 | 69.4 | 57.9 | 45.8 |
| AlphaPose | 61.0 | 81.3 | 66.0 | 71.2 | 61.4 | 51.1 |
| SPPE | 66.0 | 84.2 | 71.5 | 75.5 | 66.3 | 57.4 |
| OpenPose | - | - | - | 62.7 | 48.7 | 32.3 |
| HigherHRNet | 65.9 | 86.4 | 70.6 | 73.3 | 66.5 | 57.9 |
| HigherHRNet + SWAHR | 71.6 | 88.5 | 77.6 | 78.9 | 72.4 | 63.0 |
| HigherHRNet* | 67.6 | 87.4 | 72.6 | 75.8 | 68.1 | 58.9 |
| HigherHRNet + SWAHR* | 73.8 | 90.5 | 79.9 | 81.2 | 74.7 | 64.7 |

'*' indicates multi-scale test

Installation

For details on preparing the environment and datasets, please refer to README.md.

Download our pretrained weights from BaiduYun (Password: 8weh) or GoogleDrive to ./models.
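
The test commands below assume the checkpoints are placed under models/pose_coco/. A minimal sketch of the expected layout (the w32 filename is taken from the commands below; the w48 filename follows the same pattern and is an assumption):

mkdir -p models/pose_coco
# Place the downloaded checkpoints here, for example:
#   models/pose_coco/pose_higher_hrnet_w32_512.pth   (HRNet-w32, 512 input)
#   models/pose_coco/pose_higher_hrnet_w48_640.pth   (HRNet-w48, 640 input; assumed filename)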

Training and Testing

Testing on COCO val2017 dataset using pretrained weights

For single-scale testing:

python tools/dist_valid.py \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pose_coco/pose_higher_hrnet_w32_512.pth

By default, we use horizontal flip. To test without flip:

python tools/dist_valid.py \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pose_coco/pose_higher_hrnet_w32_512.pth \
    TEST.FLIP_TEST False

Multi-scale testing is also supported, although we do not report multi-scale results in our paper:

python tools/dist_valid.py \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pose_coco/pose_higher_hrnet_w32_512.pth \
    TEST.SCALE_FACTOR '[0.5, 1.0, 2.0]'
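
The trailing KEY VALUE overrides in these commands can be combined. For example, the sketch below (built only from the overrides already shown above) runs multi-scale testing with horizontal flip disabled:

python tools/dist_valid.py \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pose_coco/pose_higher_hrnet_w32_512.pth \
    TEST.FLIP_TEST False \
    TEST.SCALE_FACTOR '[0.5, 1.0, 2.0]'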

Training on COCO train2017 dataset

python tools/dist_train.py \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml 

By default, it will use all available GPUs on the machine for training. To specify particular GPUs, use:

CUDA_VISIBLE_DEVICES=0,1 python tools/dist_train.py \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml 
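
To train the larger HRNet-w48 model with 640 input (the second configuration reported in the tables above), point --cfg at the corresponding config file. The filename below mirrors the naming of the w32 config and is an assumption; check experiments/coco/higher_hrnet/ for the exact name:

python tools/dist_train.py \
    --cfg experiments/coco/higher_hrnet/w48_640_adam_lr1e-3.yaml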

Testing on your own images

python tools/dist_inference.py \
    --img_dir path/to/your/directory/of/images \
    --save_dir path/where/results/are/saved \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pose_coco/pose_higher_hrnet_w32_512.pth \
    TEST.SCALE_FACTOR '[0.5, 1.0, 2.0]'
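
If multi-scale inference is too slow, the TEST.SCALE_FACTOR override can be dropped to run at a single scale, mirroring the single-scale validation command above (a sketch; single-scale behavior is assumed to be the default):

python tools/dist_inference.py \
    --img_dir path/to/your/directory/of/images \
    --save_dir path/where/results/are/saved \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pose_coco/pose_higher_hrnet_w32_512.pth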

Citation

If you find this work or code helpful in your research, please cite:

@inproceedings{LuoSWAHR,
  title={Rethinking the Heatmap Regression for Bottom-up Human Pose Estimation},
  author={Zhengxiong Luo and Zhicheng Wang and Yan Huang and Liang Wang and Tieniu Tan and Erjin Zhou},
  booktitle={CVPR},
  year={2021}
}