Fast Learning of Temporal Action Proposal via Dense Boundary Generator (AAAI 2020)

Overview

Update

  • 2020.03.13: Release the complete TensorFlow and PyTorch DBG code.
  • 2019.11.12: Release the TensorFlow DBG inference code.
  • 2019.11.11: DBG is accepted by AAAI 2020.
  • 2019.11.08: Our ensemble DBG ranks No. 1 on ActivityNet.

Introduction

In this repo, we propose a novel and unified action detection framework named DBG, with superior performance over the state-of-the-art action detectors BSN and BMN. You can use the code to evaluate DBG for action proposal generation or action detection. For more details, please refer to our paper, Fast Learning of Temporal Action Proposal via Dense Boundary Generator!

Paper Introduction

[Figure: overview of the DBG framework.]

This paper introduces a novel and unified temporal action proposal generator named Dense Boundary Generator (DBG). In this work, we propose a dual-stream BaseNet to generate two different levels of more discriminative features. We then adopt a temporal boundary classification module to predict precise temporal boundaries, and an action-aware completeness regression module to provide reliable action completeness confidence.
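
For intuition, below is a highly simplified PyTorch sketch of a DBG-style model: a dual-stream BaseNet over snippet features, a boundary classification head, and a completeness confidence map over all (start, end) pairs. The channel sizes, kernel sizes, and the way the T x T confidence map is built here are illustrative assumptions, not the exact architecture from the paper (which uses the custom proposal feature generation layer compiled in the Getting Started section).

# Illustrative only: a simplified DBG-style skeleton in PyTorch.
# Layer sizes and the T x T map construction are assumptions, not the
# authors' exact architecture.
import torch
import torch.nn as nn

class DualStreamBaseNet(nn.Module):
    """Two temporal-conv streams over snippet features (e.g., 100 x 400 TSN features)."""
    def __init__(self, in_dim=400, hid_dim=256):
        super().__init__()
        self.stream_a = nn.Sequential(
            nn.Conv1d(in_dim, hid_dim, 3, padding=1), nn.ReLU(),
            nn.Conv1d(hid_dim, hid_dim, 3, padding=1), nn.ReLU())
        self.stream_b = nn.Sequential(
            nn.Conv1d(in_dim, hid_dim, 3, padding=1), nn.ReLU(),
            nn.Conv1d(hid_dim, hid_dim, 3, padding=1), nn.ReLU())

    def forward(self, x):                       # x: (B, in_dim, T)
        return self.stream_a(x), self.stream_b(x)   # two feature levels

class SimplifiedDBG(nn.Module):
    def __init__(self, in_dim=400, hid_dim=256):
        super().__init__()
        self.base = DualStreamBaseNet(in_dim, hid_dim)
        # Temporal boundary classification: start/end probability per snippet.
        self.boundary_head = nn.Sequential(
            nn.Conv1d(hid_dim, hid_dim, 3, padding=1), nn.ReLU(),
            nn.Conv1d(hid_dim, 2, 1), nn.Sigmoid())
        # Action-aware completeness regression, approximated here by a
        # bilinear score for every (start, end) pair.
        self.start_proj = nn.Conv1d(hid_dim, hid_dim, 1)
        self.end_proj = nn.Conv1d(hid_dim, hid_dim, 1)

    def forward(self, x):
        feat_a, feat_b = self.base(x)
        boundary_map = self.boundary_head(feat_a)   # (B, 2, T): start/end probs
        s = self.start_proj(feat_b)                 # (B, C, T)
        e = self.end_proj(feat_b)                   # (B, C, T)
        conf_map = torch.sigmoid(torch.einsum('bct,bcu->btu', s, e))  # (B, T, T)
        return boundary_map, conf_map

if __name__ == "__main__":
    model = SimplifiedDBG()
    feats = torch.randn(2, 400, 100)            # batch of rescaled snippet features
    boundary_map, conf_map = model(feats)
    print(boundary_map.shape, conf_map.shape)   # (2, 2, 100) and (2, 100, 100)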

ActivityNet 1.3 Results

[Figure: results on ActivityNet 1.3.]

THUMOS14 Results

[Figure: results on THUMOS14.]

Qualitative Results

Prerequisites

  • TensorFlow == 1.9.0 or PyTorch == 1.1
  • Python == 3.6
  • NVIDIA GPU (e.g., Tesla P40)
  • Linux with CUDA 9.0 and cuDNN
  • gcc 5

Getting Started

Installation

Clone the GitHub repository. We will refer to the cloned directory as $DBG_ROOT.

cd $DBG_ROOT

First, compile our proposal feature generation layers for the framework you plan to use.

Compile tensorflow-version proposal feature generation layers:

cd tensorflow/custom_op
make

Compile pytorch-version proposal feature generation layers:

cd pytorch/custom_op
python setup.py install

Download Datasets

Prepare the ActivityNet 1.3 dataset. You can use the official ActivityNet downloader to download videos from YouTube. Some videos have been deleted from YouTube, so you can also ask for the whole dataset by email.

To extract visual features, we adopt a TSN model pretrained on the training set of ActivityNet. Please refer to the TSN-yjxiong repo to extract frames and optical flow, and to the anet2016-cuhk repo for the pretrained TSN model.

For convenience of training and testing, we rescale the feature sequence of every video to the same length of 100, and we provide the 19,993 rescaled features on Google Cloud or Weiyun (微云). Put the features in the data/tsn_anet200 directory.

To generate the video features yourself, the scripts in ./tools will help you start from scratch.
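
If you want to rescale features on your own, the sketch below shows one way to do it: linear interpolation along the temporal axis, assuming each video's features are stored as a (T, C) NumPy array. The provided features and the ./tools scripts may use a different resampling scheme.

# Minimal sketch: rescale a variable-length snippet-feature sequence to a
# fixed length of 100 by linear interpolation along the temporal axis.
# Assumes a (T, C) NumPy array; the official ./tools scripts may differ.
import numpy as np

def rescale_features(feats, target_len=100):
    t, c = feats.shape
    src_pos = np.linspace(0.0, 1.0, num=t)            # original temporal positions
    dst_pos = np.linspace(0.0, 1.0, num=target_len)   # target temporal positions
    # Interpolate each channel independently onto the target positions.
    rescaled = np.stack(
        [np.interp(dst_pos, src_pos, feats[:, i]) for i in range(c)], axis=1)
    return rescaled                                   # (target_len, C)

if __name__ == "__main__":
    raw = np.random.rand(237, 400)                    # e.g., 237 snippets x 400-dim TSN feature
    print(rescale_features(raw).shape)                # (100, 400)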

Testing of DBG

If you don't want to train the model, you can run the testing code directly using the pretrained model.

A pretrained model is included in output/pretrained_model, with its parameters set in config/config_pretrained.yaml. Please check feat_dir in config/config_pretrained.yaml, then use the scripts below to run DBG.

# TensorFlow version (AUC result = 68.37%):
python tensorflow/test.py config/config_pretrained.yaml
python post_processing.py output/result/ results/result_proposals.json
python eval.py results/result_proposals.json

# PyTorch version (AUC result = 68.26%):
python pytorch/test.py config/config_pretrained.yaml
python post_processing.py output/result/ results/result_proposals.json
python eval.py results/result_proposals.json
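
To inspect the generated proposals, a small sketch like the one below can read results/result_proposals.json. It assumes the standard ActivityNet-style layout (a top-level "results" dict mapping video ids to scored segments); adjust the keys if the actual file differs.

# Minimal sketch: print the top proposals per video from the post-processed
# output. Assumes an ActivityNet-style JSON layout
# ({"results": {video_id: [{"score": ..., "segment": [start, end]}, ...]}}).
import json

with open("results/result_proposals.json") as f:
    proposals = json.load(f)["results"]

for video_id, props in list(proposals.items())[:5]:               # first 5 videos
    top = sorted(props, key=lambda p: p["score"], reverse=True)[:3]
    for p in top:
        start, end = p["segment"]
        print("%s: [%.1fs, %.1fs] score=%.3f" % (video_id, start, end, p["score"]))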

Training of DBG

We also provide training code for both the TensorFlow and PyTorch versions. Please check feat_dir in config/config.yaml and follow these steps to train your model:

1. Training

# TensorFlow version:
python tensorflow/train.py config/config.yaml

# PyTorch version:
python pytorch/train.py config/config.yaml

2. Testing

# TensorFlow version:
python tensorflow/test.py config/config.yaml

# PyTorch version:
python pytorch/test.py config/config.yaml

3. Postprocessing

python post_processing.py output/result/ results/result_proposals.json

4. Evaluation

python eval.py results/result_proposals.json
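
For convenience, the four steps above can be chained in a small driver script. The sketch below simply shells out to the PyTorch-version commands listed above and assumes it is run from $DBG_ROOT; swap in the TensorFlow entry points if needed.

# Sketch: run the PyTorch-version pipeline end to end by invoking the
# commands listed above. Run from $DBG_ROOT; replace "pytorch" with
# "tensorflow" to use the TensorFlow entry points instead.
import subprocess

STEPS = [
    ["python", "pytorch/train.py", "config/config.yaml"],         # 1. training
    ["python", "pytorch/test.py", "config/config.yaml"],          # 2. testing
    ["python", "post_processing.py", "output/result/",
     "results/result_proposals.json"],                            # 3. postprocessing
    ["python", "eval.py", "results/result_proposals.json"],       # 4. evaluation
]

for cmd in STEPS:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)   # stop immediately if any step fails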

Citation

If you find DBG useful in your research, please consider citing:

@inproceedings{DBG2020arXiv,
  author    = {Chuming Lin* and Jian Li* and Yabiao Wang and Ying Tai and Donghao Luo and Zhipeng Cui and Chengjie Wang and Jilin Li and Feiyue Huang and Rongrong Ji},
  title     = {Fast Learning of Temporal Action Proposal via Dense Boundary Generator},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
}

Contact

For any questions, please file an issue or contact:

Jian Li: [email protected]
Chuming Lin: [email protected]