A Comparative Review of Recent Kinect-Based Action Recognition Algorithms (TIP 2020, MATLAB code)

Overview

A Comparative Review of Recent Kinect-Based Action Recognition Algorithms

This repo contains:

  • the HDG implementation (MATLAB code) for 'Analysis and Evaluation of Kinect-based Action Recognition Algorithms',
  • links (Google Drive) for downloading the algorithms evaluated in our TIP journal paper, and
  • direct links (Google Drive) for downloading 5 smaller datasets for action recognition research.

1 Introduction

This repository contains the implementation of HDG presented in the following papers:

[1] Lei Wang, 2017. Analysis and Evaluation of Kinect-based Action Recognition Algorithms. Master's thesis. School of Computer Science and Software Engineering, The University of Western Australia. [ArXiv] [BibTex]

[2] Lei Wang, Du Q. Huynh, and Piotr Koniusz. A Comparative Review of Recent Kinect-Based Action Recognition Algorithms. IEEE Transactions on Image Processing, 29: 15-28, 2020. [ArXiv] [BibTex]

We also provide the links for downloading the algorithms/datasets used in our TIP paper.

2 Other algorithms compared in the TIP paper

You can download the other algorithms we evaluated in the TIP paper from the following links:

3 Datasets used in the TIP paper

3.1 Five smaller datasets

3.1.1 Depth+Skeleton

You can directly download the depth+skeleton sequences for the following smaller datasets here:

The 5 datasets above contain depth + skeleton data, which you can use directly with the HDG algorithm in this repo:

  • unzip a dataset, and
  • put the Dataset folder into the HDG folder, then
  • extract the features (refer to the following sections for more details; a sketch of the expected folder layout is shown below).
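
As a rough sketch (assuming the folder names used in Sections 4.1 to 4.4; your local copy may differ slightly), the layout after unzipping should look like:

```
HDG/
├── Dataset/
│   ├── depth/        % one .mat file per depth sequence (variable 'inDepthVideo')
│   └── skeleton/     % one .mat file per skeleton sequence (variable 'skeletonsequence')
├── MATLAB_Codes/     % feature extraction (00-03) and 'feature_concatenating'
└── Results/          % confusion matrices are saved here after classification
```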

3.1.2 Depth video only

To download the depth-only data for UWA3D Activity and UWA3D Multiview Activity II, use this link (extraction code: 172h).

To download the depth-only data for CAD-60, please use this link (extraction code: 36wt).

3.2 Big datasets (NTU RGB+D)

For large datasets such as NTU-60 and NTU-120, please refer to this link to request download access.

4 Running the HDG code

This is an implementation based on Rahmani et al.'s paper 'Real Time Action Recognition Using Histograms of Depth Gradients and Random Decision Forests' (WACV 2014).

To run our new HDG algorithm (which is analysed and compared in our TIP2020 paper):

4.0 A glance at the skeleton configuration

For more detailed information about the skeleton configuration/graph, please refer to the PDF file attached in this repo.

UWAS denotes the skeleton configuration for UWA3D Activity, and UWAW is for UWA3D Multiview Activity II.

4.1 Data preparation

  • Go to the 'Dataset' folder, then go to the 'depth' folder and copy all depth sequences into it. Each depth sequence should be a .mat file whose internal variable is named 'inDepthVideo'.

  • After that, go to the 'skeleton' folder and copy all skeleton sequences into it. Each skeleton sequence should also be a .mat file whose internal variable is named 'skeletonsequence' and has dimensions #joints x 3 x #frames, where 3 corresponds to x, y and d respectively. A quick sanity-check sketch is given below.
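
The following MATLAB snippet is a minimal sanity check, not part of the released HDG code; it assumes the file layout and variable names described above and simply verifies that every copied sequence loads with the expected variable name and shape:

```matlab
% Illustrative sanity check only -- not part of the original HDG code.
depthFiles    = dir(fullfile('Dataset', 'depth', '*.mat'));
skeletonFiles = dir(fullfile('Dataset', 'skeleton', '*.mat'));

for i = 1:numel(depthFiles)
    d = load(fullfile(depthFiles(i).folder, depthFiles(i).name));
    assert(isfield(d, 'inDepthVideo'), ...
        'Depth file %s must contain the variable ''inDepthVideo''.', depthFiles(i).name);
end

for i = 1:numel(skeletonFiles)
    s = load(fullfile(skeletonFiles(i).folder, skeletonFiles(i).name));
    assert(isfield(s, 'skeletonsequence'), ...
        'Skeleton file %s must contain the variable ''skeletonsequence''.', skeletonFiles(i).name);
    % Expected shape: #joints x 3 x #frames, where the 3 columns are x, y and d.
    assert(size(s.skeletonsequence, 2) == 3, ...
        'Skeleton sequence in %s should be #joints x 3 x #frames.', skeletonFiles(i).name);
end
```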

4.2 Feature extraction and concatenation

  • Go to the 'MATLAB_Codes' folder and run the 'main' script in each algorithm folder (in the order 00, 01, 02 and 03), and then run 'main' in 'feature_concatenating'. You can also run '02' and '03' first and then '00' and '01', since '00' may need more time for segmenting the foreground (around 6 hours) and '01' depends on the results of '00'.

  • For the UWA3D Multiview dataset, remember to convert the video sequence from uint16 to double using im2double before running each 'main' in '00' and '01': in both folders, change depthsequence=actionvolume; to depthsequence=im2double(actionvolume); (lines 33 and 17 of the respective 'main' functions).

  • For feature concatenation, you can select different combinations of features for classification. There are four feature types:

    • hod (histogram of depth),
    • hodg (histogram of depth gradients),
    • jmv (joint movement volume features), and
    • jpd (joint position differences features).
  • Remember to change the number of joints and the torso joint ID in the 'main' scripts of '02' and '03', since different datasets have different numbers of joints and different torso joint IDs (refer to the PDF attached in this repo for the skeleton configuration). A small helper capturing these values is sketched after this list.

    • MSRPairs (3D Action Pairs): 20 joints, torso joint ID is 2;
    • MSRAction3D: 20 joints, torso joint ID is 4;
    • CAD-60: 15 joints, torso joint ID is 3;
    • UWA3D single view dataset (UWA3D Activity): 15 joints, torso joint ID is 9;
    • UWA3D multi view dataset (UWA3D Multiview Activity II): 15 joints, torso joint ID is 3.
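
The per-dataset values above can be collected in a small helper like the sketch below. This is purely illustrative: the function name getSkeletonConfig is not part of the repo, where these values are instead edited by hand inside the 'main' scripts of '02' and '03'.

```matlab
function [numJoints, torsoJointID] = getSkeletonConfig(datasetName)
% Illustrative helper only -- maps a dataset name to its joint count and
% torso joint ID, as listed in the README.
switch lower(datasetName)
    case 'msrpairs'        % 3D Action Pairs
        numJoints = 20; torsoJointID = 2;
    case 'msraction3d'
        numJoints = 20; torsoJointID = 4;
    case 'cad60'
        numJoints = 15; torsoJointID = 3;
    case 'uwa3dactivity'   % UWA3D single view
        numJoints = 15; torsoJointID = 9;
    case 'uwa3dmultiview'  % UWA3D Multiview Activity II
        numJoints = 15; torsoJointID = 3;
    otherwise
        error('Unknown dataset: %s', datasetName);
end
end
```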

4.3 Classification

  • Run the 'main' script of the random decision forests (Lei uses a different 'main' for each dataset since each dataset has its own training and testing split). In Lei's implementation, half of the data are used for training and the remaining half for testing; an illustrative sketch of this split follows the list below.

    • MSRPairs (3D Action Pairs): msrpairsmain.m
    • MSRAction3D: msr3dmain.m
    • CAD-60: cadmain.m
    • UWA3D single view (UWA3D Activity): uwasinglemain.m
    • UWA3D multi view (UWA3D Multiview Activity II): uwamultimain.m
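
As an illustration of the half-training/half-testing protocol only (the dataset-specific 'main' scripts above implement the actual random decision forest; this sketch uses MATLAB's TreeBagger and assumes hypothetical 'features' and 'labels' variables produced by the feature concatenation step):

```matlab
% Illustrative sketch of the 50/50 split -- not the repo's actual classifier code.
% 'features' is an N x D matrix, 'labels' an N x 1 numeric vector (assumptions).
% Note: the repo splits by subject; here samples are split directly for brevity.
rng(1);                                    % reproducible split
N        = size(features, 1);
perm     = randperm(N);
trainIdx = perm(1:floor(N/2));
testIdx  = perm(floor(N/2)+1:end);

forest    = TreeBagger(100, features(trainIdx, :), labels(trainIdx), ...
                       'Method', 'classification');
predicted = str2double(predict(forest, features(testIdx, :)));
accuracy  = mean(predicted == labels(testIdx));
fprintf('Total accuracy: %.2f%%\n', 100 * accuracy);
```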

4.4 Visualization (i.e., confusion matrix)

  • The confusion matrix results will be saved in the 'Results' folder, and the confusion matrix will be displayed. Moreover, the total accuracy will appear in the MATLAB workspace.
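
As a minimal sketch of how such a confusion matrix can be computed and displayed with built-in MATLAB functions (the repo's own plotting code may differ; 'predicted', 'labels' and 'testIdx' refer to the classification sketch above):

```matlab
% Illustrative only: build and display a row-normalised confusion matrix.
C  = confusionmat(labels(testIdx), predicted);   % raw counts
Cn = C ./ sum(C, 2);                             % per-class (row) normalisation
imagesc(Cn); colorbar; axis square;
xlabel('Predicted class'); ylabel('True class');
title(sprintf('Total accuracy: %.2f%%', 100 * mean(predicted == labels(testIdx))));
```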

4.4.1 Save figures to pdf format

  • The saveTightFigure function was downloaded from an online resource and can be used to save the confusion matrix plot as a PDF file, for example: saveTightFigure(gcf, 'uwamultiview.pdf');

Code for parameter evaluation, and for running over all possible combinations of selecting half of the subjects (for training), is not provided in this repo.

For more information, please refer to my research report and our journal paper, or contact me.

5 Citations

Please cite the following papers if you use this work:

@mastersthesis{lei_thesis_2017,
  author = {Lei Wang},
  title  = {Analysis and Evaluation of {K}inect-based Action Recognition Algorithms},
  school = {School of Computer Science and Software Engineering, The University of Western Australia},
  year   = {2017},
  month  = {Nov}
}

@article{lei_tip_2019,
  author  = {Lei Wang and Du Q. Huynh and Piotr Koniusz},
  journal = {IEEE Transactions on Image Processing},
  title   = {A Comparative Review of Recent Kinect-Based Action Recognition Algorithms},
  year    = {2020},
  volume  = {29},
  pages   = {15-28},
  doi     = {10.1109/TIP.2019.2925285},
  ISSN    = {1941-0042}
}

Acknowledgments

I am grateful to Associate Professor Du Huynh for her valuable suggestions and discussions. We would like to thank the authors of HON4D, HOPC, LARP-SO, HPM+TM, IndRNN and ST-GCN for making their codes publicly available. We thank the ROSE Lab of Nanyang Technological University (NTU), Singapore, for making the NTU RGB+D dataset freely accessible.
