A custom DeepStack model for detecting 16 human actions.

Overview

DeepStack_ActionNET

This repository provides a custom DeepStack model, trained on the ActionNET dataset, that can be used to create an object detection API for the 16 human actions present in that dataset. The dataset itself, with its YOLO annotations, is also included in this repository.

>> Watch Video Demo

  • Download DeepStack Model and Dataset
  • Create API and Detect Actions
  • Discover more Custom Models
  • Train your own Model

Download DeepStack Model and Dataset

You can download the pre-trained DeepStack_ActionNET model and the annotated dataset via the links below.

Create API and Detect Actions

The trained model can detect the following actions in images and videos:

  • calling
  • clapping
  • cycling
  • dancing
  • drinking
  • eating
  • fighting
  • hugging
  • kissing
  • laughing
  • listening-to-music
  • running
  • sitting
  • sleeping
  • texting
  • using-laptop
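
If you consume the API programmatically, it can help to have these labels available as data. The minimal Python snippet below simply mirrors the names above; any mapping to the model's internal class ids is an assumption, not something this repository specifies.

    # The 16 ActionNET action labels, copied from the list above.
    ACTIONNET_LABELS = [
        "calling", "clapping", "cycling", "dancing",
        "drinking", "eating", "fighting", "hugging",
        "kissing", "laughing", "listening-to-music", "running",
        "sitting", "sleeping", "texting", "using-laptop",
    ]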

To start detecting, follow the steps below:

  • Install DeepStack: Install the DeepStack AI Server by following the instructions in DeepStack's documentation at https://docs.deepstack.cc

  • Download Custom Model: Download the trained custom model actionnetv2.pt from this GitHub release. Create a folder on your machine and move the downloaded model into this folder.

    E.g., a path on a Windows machine: C:\Users\MyUser\Documents\DeepStack-Models, which will make your model file path C:\Users\MyUser\Documents\DeepStack-Models\actionnetv2.pt

  • Run DeepStack: To run the DeepStack AI Server with the custom ActionNET model, run the command that applies to your machine, as detailed in DeepStack's documentation.

    For example, on a Windows machine, you run the command below:

    deepstack --MODELSTORE-DETECTION "C:\Users\MyUser\Documents\DeepStack-Models" --PORT 80

    On a Linux machine, run the command below, mounting your models folder to the container's /modelstore/detection directory:

    sudo docker run -v /home/MyUser/Documents/DeepStack-Models:/modelstore/detection -p 80:5000 deepquestai/deepstack

    Once DeepStack runs, you will see a startup log in your Terminal/Console confirming that the custom model was loaded.

    That means DeepStack is running your custom actionnetv2.pt model and is ready to start detecting actions in images via the API endpoint http://localhost:80/v1/vision/custom/actionnetv2 or http://your_machine_ip:80/v1/vision/custom/actionnetv2

  • Detect actions in an image: You can detect actions in an image by sending a POST request to the URL mentioned above, with the parameter image set to the image file, using any programming language or a tool like Postman. For the purposes of this repository, sample Python code is provided below, and a raw HTTP request sketch follows these steps.

    • A sample image can be found in images/test.jpg of this repository

    • Install Python, then install the DeepStack Python SDK via the command below:

      pip install deepstack_sdk
    • Run the Python file detect.py in this repository.

      python detect.py
    • After the code runs, you will find a new image in images/test_detected.jpg with the detection visualized, and the following results printed in the Terminal/Console:

      Name: dancing
      Confidence: 0.91482425
      x_min: 270
      x_max: 516
      y_min: 18
      y_max: 480
      -----------------------
      

    • You can try running action detection for other images.
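
    Whether or not you use the SDK, the detection is ultimately a plain HTTP POST as described above. Below is a minimal sketch using the requests library; it is not the repository's detect.py itself, and it assumes DeepStack is serving actionnetv2.pt on port 80, so that the endpoint name actionnetv2 follows from the model filename.

      import requests

      # Send the test image to the custom-model endpoint as multipart form data.
      # Assumes DeepStack is serving actionnetv2.pt on port 80.
      with open("images/test.jpg", "rb") as image_file:
          response = requests.post(
              "http://localhost:80/v1/vision/custom/actionnetv2",
              files={"image": image_file},
          )

      # DeepStack responds with JSON of the form {"success": true, "predictions": [...]};
      # each prediction holds a label, a confidence score and bounding-box coordinates.
      for prediction in response.json()["predictions"]:
          print("Name: {}".format(prediction["label"]))
          print("Confidence: {}".format(prediction["confidence"]))
          print("x_min: {}".format(prediction["x_min"]))
          print("x_max: {}".format(prediction["x_max"]))
          print("y_min: {}".format(prediction["y_min"]))
          print("y_max: {}".format(prediction["y_max"]))
          print("-----------------------")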

Discover more Custom Models

For more custom DeepStack models that have been trained and are ready to use, visit the Custom Models sample page in DeepStack's documentation: https://docs.deepstack.cc/custom-models-samples/

Train your own Model

If you would like to train a custom model yourself, follow the instructions below.

  • Prepare and Annotate: Collect images of, and annotate, the object(s) you plan to detect, as detailed here; an example of the YOLO annotation format is shown after this list
  • Train your Model: Train the model as detailed here
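
For reference, the dataset shipped with this repository uses YOLO-format annotations: each image has a matching .txt file with one line per labelled object, giving a class id followed by the box centre, width, and height, all normalized to the image dimensions. The line below is a hypothetical example; the actual class-id mapping depends on the classes file used during training (3 meaning dancing is assumed here from the alphabetical order of the 16 actions).

    # <class-id> <x-center> <y-center> <width> <height>   (coordinates normalized to [0, 1])
    3 0.512 0.430 0.380 0.760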

Comments
  • How to load a custom model actionnetv2.pt in the DeepStack Server Docker?

    Tell me, how do I correctly load the custom action model actionnetv2.pt in the DeepStack server Docker? Did I do the right thing?

    DeepStack: Version 2021.09.01. I created the /modelstore/detection folder and placed the actionnetv2.pt file there.

    After the reboot, I got a /v1/vision/custom/actionnetv2 entry in the logs. Did I do the right thing? It just confuses me that the custom entry appears as /v1/vision/custom/actionnetv2, while the rest are written like this:

    /v1/vision/face
    /v1/vision/face/recognize
    ....

    Is it necessary to register the model here as in the case of face and object recognition?
    opened by DivanX10 0
Releases (v2)
  • v2 (Aug 26, 2021)

    Version 2 of the DeepStack custom model for an object detection API that detects human actions in images and videos. It detects the following actions:

    • calling
    • clapping
    • cycling
    • dancing
    • drinking
    • eating
    • fighting
    • hugging
    • kissing
    • laughing
    • listening-to-music
    • running
    • sitting
    • sleeping
    • texting
    • using-laptop

    Download the model actionnetv2.pt from the Assets section (below) in this release.

    This model is a YOLOv5x DeepStack custom model that was trained for 150 epochs, generating a best model with the following evaluation results:

    mAP@0.5: 0.995, mAP@0.5:0.95: 0.913

    Source code(tar.gz)
    Source code(zip)
    actionnetv2.pt(169.41 MB)
  • v1 (Aug 14, 2021)

    A DeepStack custom model for an object detection API that detects human actions in images and videos. It detects the following actions:

    • calling
    • clapping
    • cycling
    • dancing
    • drinking
    • eating
    • fighting
    • hugging
    • kissing
    • laughing
    • listening-to-music
    • running
    • sitting
    • sleeping
    • texting
    • using-laptop

    Download the model actionnet.pt from the Assets section (below) in this release.

    This model is a YOLOv5x DeepStack custom model that was trained for 150 epochs, generating a best model with the following evaluation results:

    mAP@0.5: 0.9858, mAP@0.5:0.95: 0.8051

    Source code(tar.gz)
    Source code(zip)
    actionnet.pt(169.41 MB)
Owner
MOSES OLAFENWA
Software Engineer @Microsoft, a self-taught computer programmer, Deep Learning and Computer Vision researcher and developer. Creator of ImageAI.