GAN-based 3D human pose estimation model for the 3DV 2017 paper

Overview

TensorFlow implementation of the 3DV 2017 conference paper "Adversarially Parameterized Optimization for 3D Human Pose Estimation".

@inproceedings{jack2017adversarially,
  title={Adversarially Parameterized Optimization for 3D Human Pose Estimation},
  author={Jack, Dominic and Maire, Frederic and Eriksson, Anders and Shirazi, Sareh},
  booktitle={3D Vision (3DV), 2017 Fifth International Conference on},
  year={2017},
  organization={IEEE}
}

Code used to generate results for the paper has been frozen and can be found in the 3dv2017 branch. Bug fixes and extensions will be applied to other branches.

Algorithm Overview

The premise of the paper is to train a GAN to simultaneously learn a parameterization of the feasible human pose space along with a feasibility loss function.

During inference, a standard off-the-shelf optimizer infers all poses in a sequence almost independently. The scale is shared between frames; this has no effect on the reported results, since errors are computed on Procrustes-aligned inferences which optimize over scale, but it makes the visualizations easier to interpret.
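
As a minimal sketch of that inference step (not the repository's actual implementation), assume each frame comes with 2D joint observations and that generator, discriminator and project are hypothetical callables; a latent code per frame is then refined with an off-the-shelf optimizer:

import numpy as np
from scipy.optimize import minimize

def infer_pose(observed_2d, z0, generator, discriminator, project, feasibility_weight=1.0):
    """Optimize a latent code so the generated 3D pose explains a 2D observation.

    All callables are hypothetical placeholders:
      generator(z)     -> (n_joints, 3) 3D pose
      discriminator(p) -> scalar feasibility score (higher = more human-like)
      project(p)       -> (n_joints, 2) image-plane projection
    """
    def loss(z):
        pose = generator(z)
        data_term = np.sum((project(pose) - observed_2d) ** 2)
        feasibility_term = -discriminator(pose)  # feasibility learned by the GAN's discriminator
        return data_term + feasibility_weight * feasibility_term

    result = minimize(loss, z0, method='L-BFGS-B')  # any off-the-shelf optimizer works here
    return generator(result.x)

The actual objective, optimizer and weighting used in the repository differ; the sketch only conveys how a learned feasibility term combines with a data term.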

Repository Structure

Each GAN is identified by a gan_id. Hyperparameters defining the network structures and the datasets on which they are trained are specified in gan_params/gan_id.json. Three specifications (those with results highlighted in the paper) are provided: h3m_big, h3m_small and eva_big. Note that compared to typical neural networks these are all still tiny, so the difference in size should result in a negligible difference in training/inference time.

Similarly, each inference run is identified by an inference_id, the parameters of which are defined in inference_params/inference_id.json (a loading sketch follows the module list below).

Modules:

  • gan: provides application-specific GANs based on specifications in gan_params
  • human_pose_util (external repository): utility functions including geometric transforms, visualizations and dataset reading
  • serialization.py: i/o related functions for loading hyper-parameters/results
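
As a sketch of how these id-keyed json specifications might be consumed (the helper below is hypothetical, not serialization.py's actual API):

import json
import os

def load_params(params_dir, run_id):
    # Hypothetical helper: read e.g. gan_params/h3m_big.json or
    # inference_params/<inference_id>.json into a dict of hyperparameters.
    with open(os.path.join(params_dir, '%s.json' % run_id)) as fp:
        return json.load(fp)

gan_spec = load_params('gan_params', 'h3m_big')

Keying every run by an id this way lets the scripts below refer to the same specification by name.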

Scripts:

  • train.py: Trains a GAN specified by a json file in gan_params
  • gan_generator_vis.py: visualization script for a trained GAN generator
  • interactive_gan_generator_vis.ipynb: interactive jupyter/ipython notebook for visualizing a trained GAN generator
  • generate_inferences.py: Generates inferences based on parameters specified by a json file in inference_params
  • h3m_report.py/eva_report.py: reporting scripts for generated inferences
  • vis_sequence.py: visualization script for an entire inferred sequence

Usage

  1. Setup the external repositories:
     • human_pose_util
  2. Clone this repository and add the location and the parent directory(s) to your PYTHONPATH:
cd path/to/parent_folder
git clone https://github.com/jackd/adversarially_parameterized_optimization.git
git clone https://github.com/jackd/human_pose_util.git
export PYTHONPATH=/path/to/parent_folder:$PYTHONPATH
cd adversarially_parameterized_optimization
  3. Define a GAN model by creating a gan_params/gan_id.json file, or select one of the existing ones.
  4. Setup the relevant dataset(s), or create your own as described in human_pose_util.
  5. Train the GAN:
python train.py gan_id --max_steps=1e7

Our experiments were conducted on an NVIDIA Quadro K620 GPU with 2GB of memory. Training runs at ~600 batches per second with a batch size of 128, so 10 million steps (likely excessive) takes around 4.5 hours.

View training progress and compare different runs using tensorboard:

tensorboard --logdir=models
  6. (Optional) Check your generator is behaving well by running gan_generator_vis.py model_id or interactively by running interactive_gan_generator_vis.ipynb and modifying the model_id.
  7. Define an inference specification by creating an inference_params/inference_id.json file, or select one of the defaults provided.
  8. Generate inferences:
python generate_inferences.py inference_id

Sequence optimization runs at ~5-10 fps (the speed-up over the ~1 fps reported in the paper comes from a more efficient reimplementation rather than algorithmic changes).

This will save results in results.hdf5 under the inference_id group.

  9. See the results:
     • h3m_report.py or eva_report.py (depending on the dataset) gives quantitative results:
python h3m_report.py inference_id
     • vis_sequence.py visualizes inferences

Note that results are quite unstable with respect to GAN training. You may get considerably different quantitative results than those published in the paper, though qualitative behaviour should be similar.

Serialization

To aid experimentation with different parameter sets, model/inference parameters are saved as json for ease of parsing and human readability. To allow for extensibility, human_pose_util maintains registers for different datasets and skeletons.

See the human_pose_util README for details on setting up/preprocessing datasets or implementing your own.

The scripts in this project register some default h3m/eva datasets using register_defaults. While normally fast, some data conversion is performed the first time this function is run for each dataset; this requires the original datasets to be available with their paths defined (see below). If you only wish to experiment with one dataset -- e.g. h3m -- modify the default argument values for register_defaults, e.g. def register_defaults(h3m=True, eva=False): (or the relevant function calls).

If you implement your own datasets/skeletons, either add their registrations to the default functions, or edit the relevant scripts to register them manually.
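
For example, a script that only needs Human3.6M could register just that dataset before running inference; register_defaults is the function described above, but the import path below is an assumption and may differ in your checkout of human_pose_util:

# Import path is an assumption -- adjust to wherever register_defaults
# is defined in your checkout of human_pose_util.
from human_pose_util.register import register_defaults

# Register only the Human3.6M dataset/skeleton; skip HumanEva.
register_defaults(h3m=True, eva=False)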

Datasets

See human_pose_util repository for instructions for setting up datasets.

Requirements

For training/inference:

  • tensorflow 1.4
  • numpy
  • h5py

For visualizations:

  • matplotlib
  • glumpy (install from source may reduce issues)

For initial human 3.6m dataset transformations:

  • spacepy (for initial human 3.6m dataset conversion to hdf5)

Development

This branch will be actively maintained, updated and extended. For code used to generate results for the publication, see the 3dv2017 branch.

Contact

Please report any issues/bugs. Feature requests in this repository will largely be ignored, but will be considered if made in independent repositories.

Email contact to discuss ideas/collaborations is welcome: [email protected].
