Multi-Stage Spatial-Temporal Graph Convolutional Neural Network (MS-GCN)

Overview


This code implements MS-GCN, the skeleton-based action segmentation model from Automated freezing of gait assessment with marker-based motion capture and multi-stage spatial-temporal graph convolutional neural networks and Skeleton-based action segmentation with multi-stage spatial-temporal graph convolutional neural networks, arXiv 2022 (in review).

The model was originally developed for freezing of gait (FOG) assessment on a proprietary dataset. It has since also achieved strong skeleton-based action segmentation performance on public datasets, e.g. HuGaDB, LARa, PKU-MMD v2, and TUG.

Requirements

Tested on Ubuntu 16.04 with PyTorch 1.10.1. Models were trained on an Nvidia Tesla K80.

The c3d data preparation script requires the Biomechanical-Toolkit (BTK). For installation instructions, please refer to the following issue.
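
As a rough illustration (not part of this repository) of how BTK's Python bindings can be used to pull marker trajectories out of a c3d file, a minimal sketch is shown below; the file path and the "LASI" marker label are placeholders that depend on your data and marker set:

```python
import btk

# Read the acquisition from a c3d file (path is a placeholder).
reader = btk.btkAcquisitionFileReader()
reader.SetFilename("example.c3d")
reader.Update()
acq = reader.GetOutput()

# Sampling frequency and number of frames of the point (marker) data.
fs = acq.GetPointFrequency()
n_frames = acq.GetPointFrameNumber()

# 3D trajectory of a single marker as an (n_frames, 3) array;
# "LASI" is an example label and depends on the marker set used.
lasi = acq.GetPoint("LASI").GetValues()
print(fs, n_frames, lasi.shape)
```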

Content

  • data_prep/ -- Data preparation scripts.
  • main.py -- Main script. We suggest running it interactively from an IDE. Provide the dataset and train/predict arguments, e.g. --dataset=fog_example --action=train.
  • batch_gen.py -- Batch loader.
  • label_eval.py -- Compute metrics and save prediction results.
  • model.py -- Train/predict script.
  • models/ -- Location for saving the trained models.
  • models/ms_gcn.py -- The MS-GCN model.
  • models/net_utils/ -- Scripts to partition the graph for the various datasets. For more information about the partitioning, please refer to the section Graph representations. For more information about spatial-temporal graphs, please refer to ST-GCN.
  • data/ -- Location for the processed datasets. For more information, please refer to the 'FOG' example.
  • data/signals/ -- Scripts for computing the feature representations. Used for datasets that provide spatial features per joint, e.g. FOG, TUG, and PKU-MMD v2. For more information, please refer to the section Graph representations; a rough sketch of such a per-joint feature follows this list.
  • results/ -- Location for saving the results.
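
As a minimal, hypothetical sketch of one such per-joint feature (frame-to-frame velocity computed from 3D joint positions); the exact features produced by data/signals/ are dataset specific, and the (T, V, 3) layout below is assumed purely for illustration:

```python
import numpy as np

def joint_velocity(positions, fs):
    """Frame-to-frame velocity per joint.

    positions: array of shape (T, V, 3) with 3D coordinates of V joints
               over T frames (layout assumed for illustration).
    fs:        sampling frequency in Hz.
    Returns an array of shape (T, V, 3); the first frame is zero-padded.
    """
    vel = np.zeros_like(positions)
    vel[1:] = np.diff(positions, axis=0) * fs
    return vel

# Example: 9 joints in 3D over 1000 frames sampled at 100 Hz (placeholder values).
pos = np.random.randn(1000, 9, 3)
print(joint_velocity(pos, fs=100).shape)  # (1000, 9, 3)
```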

Data

After processing (the scripts are dataset specific), each processed dataset should be placed in the data/ folder. We provide an example for a motion capture dataset in c3d format. For this particular example, we extract 9 joints in 3D:

  • data_prep/read_frame.py -- Import the joints and action labels from the c3d and save both in a separate csv.
  • data_prep/gen_data/ -- Import the csv, construct the input, and save to npy for training. For more information about the input and label shape, please refer to the section Problem statement.

Please refer to the example in data/example/ for more information on how to structure the files for training/prediction.
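
As a rough, hypothetical sketch of what the csv-to-npy step amounts to (the actual column layout, file paths, and tensor shape are defined by data_prep/gen_data/ and the Problem statement section), assuming a csv with one row per frame, 9 joints x 3 coordinates, and a label column:

```python
import numpy as np
import pandas as pd

# Placeholder paths and column layout; the real ones are defined in data_prep/.
df = pd.read_csv("data/example/trial01.csv")

labels = df["label"].to_numpy()                 # (T,)
joints = df.drop(columns=["label"]).to_numpy()  # (T, 27) = 9 joints x 3D
features = joints.reshape(len(df), 9, 3)        # (T, V, C)
features = features.transpose(2, 0, 1)          # (C, T, V), ST-GCN-style layout

np.save("data/example/features/trial01.npy", features)
np.save("data/example/labels/trial01.npy", labels)
```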

Pre-trained models

Pre-trained models are provided for HuGaDB, PKU-MMD, and LARa. To reproduce the results from the paper:

  • Download the datasets from their respective repositories.
  • See the "Data" section for more information on how to prepare the datasets.
  • Place the pre-trained models in models/, e.g. models/hugadb.
  • Ensure that the correct graph representation is chosen in models/ms_gcn.py.
  • Comment out features = get_features(features) in model.py (only for LARa and HuGaDB).
  • Specify the correct sampling rate, e.g. a downsampling factor of 4 for LARa (see the sketch after this list).
  • Run main.py with the proper arguments to generate the per-sample predictions, e.g. --dataset=hugadb --action=predict.
  • Run label_eval.py with the proper arguments, e.g. --dataset=hugadb.
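
For the downsampling step, a minimal sketch of a factor-of-4 temporal downsample is shown below, assuming the features are stored as an array with the temporal axis in position 1 (shape (C, T, V)); the actual handling lives in the repository's data loading code:

```python
import numpy as np

def downsample(features, labels, factor=4):
    """Keep every `factor`-th frame along the temporal axis.

    features: array of shape (C, T, V); labels: array of shape (T,).
    The (C, T, V) layout is assumed here for illustration.
    """
    return features[:, ::factor, :], labels[::factor]

feats = np.random.randn(3, 2000, 19)   # (C, T, V); shapes are placeholders
labs = np.random.randint(0, 8, 2000)
f_ds, l_ds = downsample(feats, labs, factor=4)
print(f_ds.shape, l_ds.shape)          # (3, 500, 19) (500,)
```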

Acknowledgements

The MS-GCN model and code are heavily based on ST-GCN and MS-TCN. We thank the authors for publicly releasing their code.

License

MIT

Owner
Benjamin Filtjens
PhD Student working towards at-home freezing of gait detection https://orcid.org/0000-0003-2609-6883