As a part of the HAKE project, this repo includes reproduced SOTA models and their corresponding HAKE-enhanced versions (CVPR 2020).

Overview

HAKE-Action

HAKE-Action (TensorFlow) is a project to open-source SOTA action understanding studies based on our Human Activity Knowledge Engine. It includes reproduced SOTA models and their HAKE-enhanced versions. HAKE-Action is authored by Yong-Lu Li, Xinpeng Liu, Liang Xu, and Cewu Lu. Currently, it is maintained by Yong-Lu Li, Xinpeng Liu, and Liang Xu.

News: (2021.10.06) Our extended version of SymNet is accepted by TPAMI! Paper and code are coming soon.

(2021.2.7) Upgraded HAKE-Activity2Vec is released! Images/Videos --> human box + ID + skeleton + part states + action + representation. [Description]

Full demo: [YouTube], [bilibili]

(2021.1.15) Our extended version of TIN (Transferable Interactiveness Network) is accepted by TPAMI! New paper and code will be released soon.

(2020.10.27) The code of IDN (Paper) in NeurIPS'20 is released!

(2020.6.16) Our larger version HAKE-Large (>120K images, activity and part state labels) is released!

We released the HAKE-HICO (image-level part state labels upon HICO) and HAKE-HICO-DET (instance-level part state labels upon HICO-DET). The corresponding data can be found here: HAKE Data.

  • Paper is here.
  • More data and part states (e.g., upon AVA, more kinds of action categories, more rare actions...) are coming.
  • We will keep updating HAKE-Action to include more SOTA models and their HAKE-enhanced versions.

Data Mode

  • HAKE-HICO (PaStaNet* mode in the paper): image-level. We aggregate the part states of all active persons in an image (one or more persons) into a single image-level label set; compared with the original HICO, the only additional labels are these image-level human body part states (see the sketch after this list).

  • HAKE-HICO-DET (PaStaNet* in the paper): instance-level. We add part states for each annotated person in all images of HICO-DET; the only additional labels are instance-level human body part states.

  • HAKE-Large (PaStaNet in the paper): contains more than 120K images with action labels and the corresponding part state labels. The images come from existing action datasets and crowdsourcing, and we manually annotated all the active persons with our novel part-level semantics.

  • GT-HAKE (GT-PaStaNet* in the paper): GT-HAKE-HICO and GT-HAKE-HICO-DET. Here the ground-truth part state labels are used as the part state predictions, i.e., we assume the body part states of a person can be estimated perfectly and then use them to infer the instance activities. This mode can be seen as the upper bound of HAKE-Action. The results below show that this upper bound is far beyond the SOTA performance; thus, beyond the conventional instance-level methods, continuing to promote part-level methods based on HAKE is a very promising direction.
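
For intuition, the aggregation in the HAKE-HICO mode can be sketched as a per-class logical OR over the active persons in an image. This is a minimal sketch; the class count and indices below are made-up placeholders, not the released data format:

import numpy as np

NUM_PASTA = 76  # hypothetical number of part-state classes, for illustration only

def aggregate_part_states(person_pasta_labels):
    # Image-level labels are the union (logical OR) of the binary
    # part-state vectors of all active persons in the image.
    image_label = np.zeros(NUM_PASTA, dtype=np.int64)
    for labels in person_pasta_labels:
        image_label |= np.asarray(labels, dtype=np.int64)
    return image_label

# Two active persons; the image-level label covers both persons' part states.
p1 = np.zeros(NUM_PASTA, dtype=np.int64)
p1[[3, 17]] = 1
p2 = np.zeros(NUM_PASTA, dtype=np.int64)
p2[[17, 42]] = 1
print(np.nonzero(aggregate_part_states([p1, p2])))  # -> (array([ 3, 17, 42]),)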

Notes

Activity2Vec and PaSta-R are our part-state-based modules, which infer actions from part-level semantics rather than the instance-level semantics used in previous work. For example, Pairwise + HAKE-HICO pre-trained Activity2Vec + Linear PaSta-R (the seventh row of the HICO table below) achieves 45.9 mAP on HICO. More details can be found in our CVPR 2020 paper: PaStaNet: Toward Human Activity Knowledge Engine.
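
As a rough illustration, here is a minimal tf.keras sketch of the "Linear" PaSta-R variant: activities are inferred from the part-state-based Activity2Vec representation with a single fully-connected layer. The feature dimension below is an assumption, not the repo's actual value; HICO has 600 activity (human-object interaction) categories.

import tensorflow as tf

ACTIVITY2VEC_DIM = 1024  # assumed size of the Activity2Vec feature
NUM_ACTIVITIES = 600     # HICO's 600 human-object interaction categories

pasta_r_linear = tf.keras.Sequential([
    tf.keras.Input(shape=(ACTIVITY2VEC_DIM,)),
    # Multi-label activity recognition: one sigmoid score per activity class.
    tf.keras.layers.Dense(NUM_ACTIVITIES, activation="sigmoid"),
])

features = tf.random.normal([8, ACTIVITY2VEC_DIM])  # a batch of Activity2Vec features
activity_scores = pasta_r_linear(features)          # shape: (8, 600)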

Code

The two versions of HAKE-Action (image-level and instance-level) are released in two branches of this repo.

Models on HICO

| Instance-level | +Activity2Vec | +PaSta-R | mAP | Few@1 | Few@5 | Few@10 |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| R*CNN | - | - | 28.5 | - | - | - |
| Girdhar et al. | - | - | 34.6 | - | - | - |
| Mallya et al. | - | - | 36.1 | - | - | - |
| Pairwise | - | - | 39.9 | 13.0 | 19.8 | 22.3 |
| - | HAKE-HICO | Linear | 44.5 | 26.9 | 30.0 | 30.7 |
| Mallya et al. | HAKE-HICO | Linear | 45.0 | 26.5 | 29.1 | 30.3 |
| Pairwise | HAKE-HICO | Linear | 45.9 | 26.2 | 30.6 | 31.8 |
| Pairwise | HAKE-HICO | MLP | 45.6 | 26.0 | 30.8 | 31.9 |
| Pairwise | HAKE-HICO | GCN | 45.6 | 25.2 | 30.0 | 31.4 |
| Pairwise | HAKE-HICO | Seq | 45.9 | 25.3 | 30.2 | 31.6 |
| Pairwise | HAKE-HICO | Tree | 45.8 | 24.9 | 30.3 | 31.8 |
| Pairwise | HAKE-Large | Linear | 46.3 | 24.7 | 31.8 | 33.1 |
| Pairwise | GT-HAKE-HICO | Linear | 65.6 | 47.5 | 55.4 | 56.6 |
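
For reference, the mAP reported above is the mean of per-class average precision over the HICO categories. Below is a minimal sketch of that computation; the official HICO evaluation code remains the reference, this is only an illustration:

import numpy as np
from sklearn.metrics import average_precision_score

def mean_ap(scores, labels):
    # scores, labels: (num_images, num_classes) prediction scores and binary GT.
    aps = [
        average_precision_score(labels[:, c], scores[:, c])
        for c in range(labels.shape[1])
        if labels[:, c].any()  # skip classes with no positives in this split
    ]
    return float(np.mean(aps))

# Example with random data (600 HICO categories):
rng = np.random.default_rng(0)
print(mean_ap(rng.random((100, 600)), rng.integers(0, 2, size=(100, 600))))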

Models on HICO-DET

Using Object Detections from iCAN

| Instance-level | +Activity2Vec | +PaSta-R | Full (def) | Rare (def) | Non-Rare (def) | Full (ko) | Rare (ko) | Non-Rare (ko) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| iCAN | - | - | 14.84 | 10.45 | 16.15 | 16.26 | 11.33 | 17.73 |
| TIN | - | - | 17.03 | 13.42 | 18.11 | 19.17 | 15.51 | 20.26 |
| iCAN | HAKE-HICO-DET | Linear | 19.61 | 17.29 | 20.30 | 22.10 | 20.46 | 22.59 |
| TIN | HAKE-HICO-DET | Linear | 22.12 | 20.19 | 22.69 | 24.06 | 22.19 | 24.62 |
| TIN | HAKE-Large | Linear | 22.65 | 21.17 | 23.09 | 24.53 | 23.00 | 24.99 |
| TIN | GT-HAKE-HICO-DET | Linear | 34.86 | 42.83 | 32.48 | 35.59 | 42.94 | 33.40 |

Models on AVA (Frame-based)

| Method | +Activity2Vec | +PaSta-R | mAP |
| :--- | :--- | :--- | :--- |
| AVA-TF-Baseline | - | - | 11.4 |
| LFB-Res-50-baseline | - | - | 22.2 |
| LFB-Res-101-baseline | - | - | 23.3 |
| AVA-TF-Baseline | HAKE-Large | Linear | 15.6 |
| LFB-Res-50-baseline | HAKE-Large | Linear | 23.4 |
| LFB-Res-101-baseline | HAKE-Large | Linear | 24.3 |

Models on V-COCO

| Method | +Activity2Vec | +PaSta-R | AP(role), Scenario 1 | AP(role), Scenario 2 |
| :--- | :--- | :--- | :--- | :--- |
| iCAN | - | - | 45.3 | 52.4 |
| TIN | - | - | 47.8 | 54.2 |
| iCAN | HAKE-Large | Linear | 49.2 | 55.6 |
| TIN | HAKE-Large | Linear | 51.0 | 57.5 |

Training Details

We first pre-train Activity2Vec and PaSta-R with activity and PaSta labels. Then we replace the last FC layer in PaSta-R to fit the activity categories of the target dataset. Finally, we freeze Activity2Vec and fine-tune PaSta-R on the train set of the target dataset. Here, HAKE works like ImageNet, and Activity2Vec is used as a pre-trained knowledge engine to promote other tasks.
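
A hedged tf.keras sketch of this recipe follows; the module definitions and dimensions below are placeholders, not this repo's actual implementation:

import tensorflow as tf

NUM_TARGET_CLASSES = 80  # hypothetical category count of the target dataset

# Placeholder stand-ins for the pre-trained modules.
activity2vec = tf.keras.Sequential([tf.keras.layers.Dense(1024, activation="relu")])
pasta_r_body = tf.keras.Sequential([tf.keras.layers.Dense(512, activation="relu")])

# Replace the last FC of PaSta-R to fit the target dataset's categories.
new_head = tf.keras.layers.Dense(NUM_TARGET_CLASSES, activation="sigmoid")

# Freeze Activity2Vec; only PaSta-R is fine-tuned on the target train set.
activity2vec.trainable = False
model = tf.keras.Sequential([activity2vec, pasta_r_body, new_head])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(target_train_features, target_train_labels, ...)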

Citation

If you find our work useful, please consider citing:

@inproceedings{li2020pastanet,
  title={PaStaNet: Toward Human Activity Knowledge Engine},
  author={Li, Yong-Lu and Xu, Liang and Liu, Xinpeng and Huang, Xijie and Xu, Yue and Wang, Shiyi and Fang, Hao-Shu and Ma, Ze and Chen, Mingyang and Lu, Cewu},
  booktitle={CVPR},
  year={2020}
}
@inproceedings{li2019transferable,
  title={Transferable Interactiveness Knowledge for Human-Object Interaction Detection},
  author={Li, Yong-Lu and Zhou, Siyuan and Huang, Xijie and Xu, Liang and Ma, Ze and Fang, Hao-Shu and Wang, Yanfeng and Lu, Cewu},
  booktitle={CVPR},
  year={2019}
}
@inproceedings{lu2018beyond,
  title={Beyond holistic object recognition: Enriching image understanding with part states},
  author={Lu, Cewu and Su, Hao and Li, Yonglu and Lu, Yongyi and Yi, Li and Tang, Chi-Keung and Guibas, Leonidas J},
  booktitle={CVPR},
  year={2018}
}

HAKE

HAKE [website] is a new large-scale knowledge base and engine for human activity understanding. HAKE provides elaborate and abundant body part state labels for active human instances in a large number of images and videos. With HAKE, we boost the action understanding performance on widely-used human activity benchmarks. We are still enlarging and enriching it, and look forward to working with outstanding researchers around the world on its applications and further improvements. If you have any advice or interest, please feel free to contact Yong-Lu Li ([email protected]).

If you run into any problems or find any bugs, don't hesitate to comment on GitHub or make a pull request!

HAKE-Action is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please drop an e-mail; we will send you the detailed agreement.
