CPF: Learning a Contact Potential Field to Model the Hand-object Interaction

Overview

Contact Potential Field

This repo contains the model, demo, and test code for our paper: CPF: Learning a Contact Potential Field to Model the Hand-object Interaction

Guide to the Demo

1. Get our code:

$ git clone --recursive https://github.com/lixiny/CPF.git
$ cd CPF

2. Set up your new environment:

$ conda env create -f environment.yaml
$ conda activate cpf

3. Download the asset files and put them in the assets folder.

Download the MANO model files from the official MANO website and put them into assets/mano. We currently only use MANO_RIGHT.pkl.

Now your assets folder should look like this:

.
├── anchor/
│   ├── anchor_mapping_path.pkl
│   ├── anchor_weight.txt
│   ├── face_vertex_idx.txt
│   └── merged_vertex_assignment.txt
├── closed_hand/
│   └── hand_mesh_close.obj
├── fhbhands_fits/
│   ├── Subject_1/
│   │   ├── ...
│   ├── Subject_2/
│   └── ...
├── hand_palm_full.txt
└── mano/
    ├── fhb_skel_centeridx9.pkl
    ├── info.txt
    ├── LICENSE.txt
    └── MANO_RIGHT.pkl
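
To confirm the MANO asset is readable before running the demo (a common source of the FileNotFoundError reported in the comments below), here is a minimal sanity-check sketch. It assumes the manopth fork bundled with this repo keeps the same ManoLayer interface as the original hassony2/manopth:

    import os
    import torch
    from manopth.manolayer import ManoLayer  # bundled with this repo

    # fail early if the MANO model file is missing
    assert os.path.isfile("assets/mano/MANO_RIGHT.pkl"), \
        "MANO_RIGHT.pkl must be placed under assets/mano"

    # build a right-hand layer from the assets folder and run a zero pose
    mano_layer = ManoLayer(mano_root="assets/mano", side="right",
                           use_pca=False, flat_hand_mean=True)
    verts, joints = mano_layer(torch.zeros(1, 48))
    print(verts.shape)  # expected: torch.Size([1, 778, 3])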

4. Download Dataset

First-Person Hand Action Benchmark (fhb)

Download and unzip the First-Person Hand Action Benchmark dataset into the data/fhbhands folder, following the official instructions. If everything is correct, your data/fhbhands should look like this:

.
├── action_object_info.txt
├── action_sequences_normalized/
├── change_log.txt
├── data_split_action_recognition.txt
├── file_system.jpg
├── Hand_pose_annotation_v1/
├── Object_6D_pose_annotation_v1_1/
├── Object_models/
├── Subjects_info/
├── Video_files/
├── Video_files_480/   # optional, created by the resizing step below

Optionally, resize the images (this speeds up training!) using the reduce_fphab.py script from handobjectconsist:

$ python reduce_fphab.py
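
For reference, a rough sketch of what that resizing step produces. The 480x270 target resolution, the .jpeg extension, and the directory layout are assumptions here; defer to the actual reduce_fphab.py:

    import os
    from PIL import Image

    SRC = "data/fhbhands/Video_files"
    DST = "data/fhbhands/Video_files_480"

    for root, _, files in os.walk(SRC):
        for name in files:
            if not name.endswith(".jpeg"):
                continue
            src_path = os.path.join(root, name)
            dst_path = src_path.replace(SRC, DST, 1)
            os.makedirs(os.path.dirname(dst_path), exist_ok=True)
            with Image.open(src_path) as im:
                # 1920x1080 frames -> 480 px wide (assumed target size)
                im.resize((480, 270)).save(dst_path)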

Download our fhbhands_supp and place it at data/fhbhands_supp.

Download our fhbhands_example and place it at data/fhbhands_example; it contains 10 samples designed to demonstrate our pipeline. The data folder should now include:

├── fhbhands/
├── fhbhands_supp/
│   ├── Object_models/
│   └── Object_models_binvox/
├── fhbhands_example/
│   ├── annotations/
│   ├── images/
│   ├── object_models/
│   └── sample_list.txt

HO3D

Download and unzip the HO3D dataset into the data/HO3D folder, following the official instructions. If everything is correct, the HO3D and YCB folders in your data directory should look like this:

data/
├── HO3D/
│   ├── evaluation/
│   ├── evaluation.txt
│   ├── train/
│   └── train.txt
├── YCB_models/
│   ├── 002_master_chef_can/
│   ├── ...

Download our YCB_models_supp and place it at data/YCB_models_supp.

Now the data folder should have a root structure like:

data/
├── fhbhands/
├── fhbhands_supp/
├── fhbhands_example/
├── HO3D/
├── YCB_models/
├── YCB_models_supp/
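
Before moving on, it may help to verify the layout; a minimal check, using only the folder names listed above:

    import os

    expected = ["data/fhbhands", "data/fhbhands_supp", "data/fhbhands_example",
                "data/HO3D", "data/YCB_models", "data/YCB_models_supp"]
    missing = [p for p in expected if not os.path.isdir(p)]
    print("missing folders:", missing if missing else "none")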

5. Download pre-trained checkpoints

Download our pre-trained CPF_checkpoints and unzip them into the CPF_checkpoints folder:

CPF_checkpoints/
├── honet/
│   ├── fhb/
│   ├── ho3dofficial/
│   └── ho3dv1/
├── picr/
│   ├── fhb/
│   ├── ho3dofficial/
│   └── ho3dv1/
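
To confirm a checkpoint downloaded intact, you can load it on CPU, as in this sketch. We assume the files are standard torch.save archives; the printed keys are for inspection only:

    import torch

    # load on CPU so no GPU is needed for the check
    ckpt = torch.load("CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar",
                      map_location="cpu")
    print(type(ckpt), list(ckpt.keys()) if isinstance(ckpt, dict) else "")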

6. Launch visualization

We create an FHBExample dataset in hocontact/hodatasets/fhb_example.py that contains only 10 samples, to demonstrate our pipeline. Note: this demo requires an active screen for visualization. Press q in the "runtime hand" window to start fitting.

$ python training/run_demo.py \
    --gpu 0 \
    --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar \
    --honet_mano_fhb_hand

7. Test on full dataset (FHB, HO3D v1/v2)

We provide shell scripts to test on the full datasets and approximately reproduce our results.

FHB

Dump the results of HoNet and PiCR:

$ python training/dumppicr_dist.py \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12355 \
    --exp_keyword fhb \
    --train_datasets fhb \
    --train_splits train \
    --val_dataset fhb \
    --val_split test \
    --split_mode actions \
    --batch_size 8 \
    --dump_eval \
    --dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr \
    --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar

Then reload the dumped results and run the GeO optimizer:

# setting 1: hand-only
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python training/optimize.py \
    --n_workers 16 \
    --data_path common/picr/fhbhands/test_actions_mf1.0_rf0.25_fct5.0_ec \
    --mode hand

# setting 2: hand-obj
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python training/optimize.py \
    --n_workers 16 \
    --data_path common/picr/fhbhands/test_actions_mf1.0_rf0.25_fct5.0_ec \
    --mode hand_obj \
    --compensate_tsl

HO3Dv1

Dump:

$ python training/dumppicr_dist.py  \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12356 \
    --exp_keyword ho3dv1 \
    --train_datasets ho3d \
    --train_splits train \
    --val_dataset ho3d \
    --val_split test \
    --split_mode objects \
    --batch_size 4 \
    --dump_eval \
    --dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr_ho3dv1 \
    --init_ckpt CPF_checkpoints/picr/ho3dv1/checkpoint_300.pth.tar

Then reload the dumped results and run the optimizer:

# hand-only
$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dv1/HO3D/test_objects_mf1_likev1_fct5.0_ec/ \
    --lr 1e-2 \
    --n_iter 500 \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 4.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand

# hand-obj
$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dv1/HO3D/test_objects_mf1_likev1_fct5.0_ec/ \
    --lr 1e-2 \
    --n_iter 500  \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 6.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand_obj

HO3Dofficial

Dump:

$ python training/dumppicr_dist.py  \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12356 \
    --exp_keyword ho3dofficial \
    --train_datasets ho3d \
    --train_splits val \
    --val_dataset ho3d \
    --val_split test \
    --split_mode official \
    --batch_size 4 \
    --dump_eval \
    --dump \
    --test_dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr_ho3dofficial \
    --init_ckpt CPF_checkpoints/picr/ho3dofficial/checkpoint_300.pth.tar

Then reload the dumped results and run the optimizer:

$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dofficial/HO3D/test_official_mf1_likev1_fct\(x\)_ec/  \
    --lr 1e-2 \
    --n_iter 500 \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 2.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand_obj

Results

Testing on the full dataset may take a while (0.5 to 1.5 days), so we also provide our test results in fitting_res.txt.

K-MANO

We provide a PyTorch implementation of our kinematic-chained MANO in lixiny/manopth, modified from the original hassony2/manopth. Thanks to Yana Hasson for providing the code.

Citation

If you find this work helpful, please consider citing us:

@article{yang2020cpf,
  title={CPF: Learning a Contact Potential Field to Model the Hand-object Interaction},
  author={Yang, Lixin and Zhan, Xinyu and Li, Kailin and Xu, Wenqiang and Li, Jiefeng and Lu, Cewu},
  journal={arXiv preprint arXiv:2012.00924},
  year={2020}
}

If you have any questions or suggestions, do not hesitate to contact me at siriusyang[at]sjtu[dot]edu[dot]cn.

Comments
  • FileNotFoundError: [Errno 2] No such file or directory: 'assets/mano/MANO_RIGHT.pkl'

    I executed this command: python training/run_demo.py --gpu 0 --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar --honet_mano_fhb_hand

    So I moved the assets/mano folder to the path CPF/manopth/mano/webuser/, but I am still getting the error.

    opened by anjugopinath 3
  • AttributeError: 'ParsedRequirement' object has no attribute 'req'

    Could you tell me which version of Anaconda to use, please? I am getting the below error:

    neptune:/s/red/a/nobackup/vision/anju/CPF$ conda env create -f environment.yaml
    Collecting package metadata (repodata.json): done
    Solving environment: done

    ==> WARNING: A newer version of conda exists. <==
      current version: 4.9.2
      latest version: 4.10.1

    Please update conda by running

        $ conda update -n base -c defaults conda

    Preparing transaction: done
    Verifying transaction: done
    Executing transaction: done
    Installing pip dependencies:
    Ran pip subprocess with arguments:
    ['/s/chopin/a/grad/anju/.conda/envs/cpf/bin/python', '-m', 'pip', 'install', '-U', '-r',
     '/s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt']
    Pip subprocess output:
    Collecting git+https://github.com/utiasSTARS/liegroups.git
      Cloning https://github.com/utiasSTARS/liegroups.git to /tmp/pip-req-build-ey_prxpa
    Obtaining file:///s/red/a/nobackup/vision/anju/CPF/manopth
    Obtaining file:///s/red/a/nobackup/vision/anju/CPF
    Collecting trimesh==3.8.10
      Using cached trimesh-3.8.10-py3-none-any.whl (625 kB)
    Collecting open3d==0.10.0.0
      Using cached open3d-0.10.0.0-cp38-cp38-manylinux1_x86_64.whl (4.7 MB)
    Collecting pyrender==0.1.43
      Using cached pyrender-0.1.43-py3-none-any.whl (1.2 MB)
    Collecting scikit-learn==0.23.2
      Using cached scikit_learn-0.23.2-cp38-cp38-manylinux1_x86_64.whl (6.8 MB)
    Collecting chumpy==0.69
      Using cached chumpy-0.69.tar.gz (50 kB)

    Pip subprocess error:
    ERROR: Command errored out with exit status 1:
      File "/tmp/pip-install-hnf78qhk/chumpy/setup.py", line 15, in <module>
        install_requires = [str(ir.req) for ir in install_reqs]
    AttributeError: 'ParsedRequirement' object has no attribute 'req'
    ----------------------------------------
    ERROR: Command errored out with exit status 1: python setup.py egg_info
    Check the logs for full command output.

    failed

    CondaEnvException: Pip failed

    opened by anjugopinath 3
  • How to use CPF on both hands?

    Thanks a lot for your great work! I have a question: since you only use MANO_RIGHT.pkl, it seems that CPF can currently only construct the right-hand model, right? What needs to be modified to use CPF on both hands? Thanks!

    opened by buaacyw 3
  • Error when executing command "conda env create -f environment.yaml"

    Hi,

    I get the below error when executing the command "conda env create -f environment.yaml":

    CondaError: Downloaded bytes did not match Content-Length
      url: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-64/pytorch-1.6.0-py3.8_cuda10.2.89_cudnn7.6.5_0.tar.bz2
      target_path: /home/anju/anaconda3/pkgs/pytorch-1.6.0-py3.8_cuda10.2.89_cudnn7.6.5_0.tar.bz2
      Content-Length: 564734769
      downloaded bytes: 221675180

    opened by anjugopinath 1
  • Some questions about the PiCR code

    In contacthead.py, the three decoders have different input dimensions:

        self.vertex_contact_decoder = PointNetDecodeModule(self._concat_feat_dim, 1)
        self.contact_region_decoder = PointNetDecodeModule(self._concat_feat_dim + 1, self.n_region)
        self.anchor_elasti_decoder = PointNetDecodeModule(self._concat_feat_dim + 17, self.n_anchor)

    I am wondering if this part is used to predict the selected anchor points within each subregion. The classification of subregions is obtained by contact_region_decoder, and then the anchor points are predicted by anchor_elasti_decoder, is that right?

    I am a little confused because, according to the paper, Anchor Elasticity (AE) represents the elasticities of the attractive springs, but in the code the output of anchor_elasti_decoder has no relation to the elasticity parameter. I'm wondering if there is some part I have missed.

    Sorry for any trouble caused, and thanks for your help!

    opened by lym29 0
  • What's the meaning of "adapt"?

    I notice that there are hand_pose_axisang_adapt_np and hand_pose_axisang_np in your code. Could you please explain the difference between them?

    opened by Yamato-01 5
  • Expected code date?

    Hi!

    I just read through your paper, congratulations on the great work! I love the fact that you provide an anatomically-constrained MANO, and the per-object-vertex hand part affinity.

    I look forward to the code release :)

    Do you have a planned date in mind?

    All the best,

    Yana

    opened by hassony2 4