FAC-Net

Foreground-Action Consistency Network for Weakly Supervised Temporal Action Localization
Linjiang Huang (CUHK), Liang Wang (CASIA), Hongsheng Li (CUHK)

Paper: arXiv, ICCV

Overview

We argue that existing methods for weakly supervised temporal action localization cannot guarantee foreground-action consistency, that is, that the foreground and the actions are mutually inclusive. We therefore propose a novel method named Foreground-Action Consistency Network (FAC-Net) to address this issue. Experimental results on THUMOS14 are shown below.

Method \ mAP(%)    @0.1   @0.2   @0.3   @0.4   @0.5   @0.6   @0.7   AVG
UntrimmedNet       44.4   37.7   28.2   21.1   13.7   -      -      -
STPN               52.0   44.7   35.5   25.8   16.9   9.9    4.3    27.0
W-TALC             55.2   49.6   40.1   31.1   22.8   -      7.6    -
AutoLoc            -      -      35.8   29.0   21.2   13.4   5.8    -
CleanNet           -      -      37.0   30.9   23.9   13.9   7.1    -
MAAN               59.8   50.8   41.1   30.6   20.3   12.0   6.9    31.6
CMCS               57.4   50.8   41.2   32.1   23.1   15.0   7.0    32.4
BM                 60.4   56.0   46.6   37.5   26.8   17.6   9.0    36.3
RPN                62.3   57.0   48.2   37.2   27.9   16.7   8.1    36.8
DGAM               60.0   54.2   46.8   38.2   28.8   19.8   11.4   37.0
TSCN               63.4   57.6   47.8   37.7   28.7   19.4   10.2   37.8
EM-MIL             59.1   52.7   45.5   36.8   30.5   22.7   16.4   37.7
BaS-Net            58.2   52.3   44.6   36.0   27.0   18.6   10.4   35.3
A2CL-PT            61.2   56.1   48.1   39.0   30.1   19.2   10.6   37.8
ACM-BANet          64.6   57.7   48.9   40.9   32.3   21.9   13.5   39.9
HAM-Net            65.4   59.0   50.3   41.1   31.0   20.7   11.1   39.8
UM                 67.5   61.2   52.3   43.4   33.7   22.9   12.1   41.9
FAC-Net (Ours)     67.6   62.1   52.6   44.3   33.4   22.5   12.7   42.2
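
For intuition only, the sketch below illustrates the general foreground-action consistency idea in a simplified form: a class-agnostic foreground attention and the per-class action scores are encouraged to agree over time. The tensor names, shapes, and loss form are assumptions made for illustration and are not FAC-Net's actual branches or losses; please refer to the paper for the exact formulation.

# Illustrative sketch of a foreground-action consistency term. This is NOT
# FAC-Net's actual loss -- names, shapes and the loss form are assumptions
# chosen only to convey the "mutually inclusive" idea.
import torch
import torch.nn.functional as F

def consistency_loss(cas, fg_att):
    """cas:    class activation sequence, shape (B, T, C), raw logits.
    fg_att: class-agnostic foreground attention, shape (B, T), values in [0, 1].
    Frames with high action scores should be foreground, and foreground
    frames should carry high action scores."""
    # Per-frame "actionness" derived from the class scores (max over classes).
    actionness = torch.sigmoid(cas).max(dim=2)[0]                   # (B, T)
    # Symmetric consistency: each signal is pushed toward the other.
    loss_a2f = F.binary_cross_entropy(fg_att, actionness.detach())
    loss_f2a = F.binary_cross_entropy(actionness, fg_att.detach())
    return loss_a2f + loss_f2a

# Toy usage with random tensors (750 snippets, 20 THUMOS'14 classes).
cas = torch.randn(2, 750, 20)
fg_att = torch.rand(2, 750)
print(consistency_loss(cas, fg_att).item())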

Prerequisites

Recommended Environment

  • Python 3.6
  • PyTorch 1.2
  • Tensorboard Logger
  • CUDA 10.0
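
A minimal way to confirm that the installed versions match this environment (a convenience check only, not part of the repo):

# Quick environment check (convenience only, not part of the repo).
import sys
import torch

print("python :", sys.version.split()[0])   # expect 3.6.x
print("pytorch:", torch.__version__)        # expect 1.2.x
print("cuda   :", torch.version.cuda)       # expect 10.0
print("gpu ok :", torch.cuda.is_available())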

Data Preparation

  1. Prepare THUMOS'14 dataset.

    • We recommend using features and annotations provided by this repo.
  2. Place the features and annotations inside a dataset/Thumos14reduced/ folder.
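
As a quick sanity check, something along the following lines can verify that the features load as expected. The feature file name below is only an example of the pre-extracted I3D feature format commonly distributed for THUMOS'14; adjust it to whatever files your download actually contains.

# Hypothetical sanity check for the prepared dataset folder.
# The feature file name is an assumption -- replace it with the actual
# file(s) shipped with your download.
import os
import numpy as np

dataset_root = "dataset/Thumos14reduced"
feature_file = os.path.join(dataset_root, "Thumos14reduced-I3D-JOINTFeatures.npy")

features = np.load(feature_file, allow_pickle=True, encoding="bytes")
print("number of videos:", len(features))
print("first video feature shape:", features[0].shape)  # (num_snippets, feature_dim)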

Usage

Training

You can easily train the model by running the provided script.

  • Refer to train_options.py and set the dataset-root argument to the path of your dataset folder.

  • Run the command below.

$ python train_main.py --run-type 0 --model-id 1   # rgb stream
$ python train_main.py --run-type 1 --model-id 2   # flow stream

Make sure you use a different model-id for the RGB and optical-flow streams. Models are saved in ./ckpt/dataset_name/model_id/.

Evaluation

The trained model can be found here. Rename the file to xxx.pkl (e.g., 100.pkl) and place it in ./ckpt/dataset_name/model_id/. You can then evaluate the model following the two-stream evaluation process below.

Single stream evaluation

  • Run the command below.
$ python train_main.py --pretrained --run-type 2 --model-id 1 --load-epoch 100  # rgb stream
$ python train_main.py --pretrained --run-type 3 --model-id 2 --load-epoch 100  # flow stream

load-epoch refers to the epoch of the best model. The best model does not always occur at epoch 100; refer to the log in the same folder as the saved models to find the best epoch. Make sure the model-id matches the one used during training.

Two stream evaluation

  • Run the command below using our provided models.
$ python test_main.py --rgb-model-id 1 --flow-model-id 2 --rgb-load-epoch 100 --flow-load-epoch 100
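
For reference, two-stream evaluation of this kind typically fuses the temporal class scores of the RGB and flow models before thresholding and proposal generation. The sketch below shows a simple weighted late fusion; the function name, weighting, and shapes are assumptions for illustration, not the actual logic of test_main.py.

# Illustrative late fusion of RGB and flow temporal class scores
# (an assumption, not the actual test_main.py implementation).
import numpy as np

def fuse_two_streams(rgb_scores, flow_scores, flow_weight=0.5):
    """rgb_scores, flow_scores: arrays of shape (T, C) for one video.
    Returns fused per-snippet class scores used for localization."""
    return (1.0 - flow_weight) * rgb_scores + flow_weight * flow_scores

# Example usage with random placeholders:
rgb = np.random.rand(750, 20)
flow = np.random.rand(750, 20)
fused = fuse_two_streams(rgb, flow, flow_weight=0.5)
print(fused.shape)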

References

We referenced the repos below for the code.

If you find this code useful, please cite our paper.

@InProceedings{Huang_2021_ICCV,
    author    = {Huang, Linjiang and Wang, Liang and Li, Hongsheng},
    title     = {Foreground-Action Consistency Network for Weakly Supervised Temporal Action Localization},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {8002-8011}
}

Contact

If you have any questions or comments, please contact the first author of the paper, Linjiang Huang ([email protected]).

Code for the paper "Query Embedding on Hyper-relational Knowledge Graphs"

Query Embedding on Hyper-Relational Knowledge Graphs This repository contains the code used for the experiments in the paper Query Embedding on Hyper-

DimitrisAlivas 19 Jul 26, 2022
Analysis of Antarctica sequencing samples contaminated with SARS-CoV-2

Analysis of SARS-CoV-2 reads in sequencing of 2018-2019 Antarctica samples in PRJNA692319 The samples analyzed here are described in this preprint, wh

Jesse Bloom 4 Feb 09, 2022
Vanilla and Prototypical Networks with Random Weights for image classification on Omniglot and mini-ImageNet. Made with Python3.

vanilla-rw-protonets-project Vanilla Prototypical Networks and PNs with Random Weights for image classification on Omniglot and mini-ImageNet. Made wi

Giovani Candido 8 Aug 31, 2022
A self-supervised learning framework for audio-visual speech

AV-HuBERT (Audio-Visual Hidden Unit BERT) Learning Audio-Visual Speech Representation by Masked Multimodal Cluster Prediction Robust Self-Supervised A

Meta Research 431 Jan 07, 2023
Hierarchical Attentive Recurrent Tracking

Hierarchical Attentive Recurrent Tracking This is an official Tensorflow implementation of single object tracking in videos by using hierarchical atte

Adam Kosiorek 147 Aug 07, 2021
Structured Edge Detection Toolbox

################################################################### # # # Structure

Piotr Dollar 779 Jan 02, 2023
use tensorflow 2.0 to tell a dog and cat from a specified picture

dog_or_cat use tensorflow 2.0 to tell a dog and cat from a specified picture This is one of the classic experiments for the introduction of deep learn

你这个代码我看不懂 1 Oct 22, 2021
Differentiable architecture search for convolutional and recurrent networks

Differentiable Architecture Search Code accompanying the paper DARTS: Differentiable Architecture Search Hanxiao Liu, Karen Simonyan, Yiming Yang. arX

Hanxiao Liu 3.7k Jan 09, 2023
GANsformer: Generative Adversarial Transformers Drew A

GANformer: Generative Adversarial Transformers Drew A. Hudson* & C. Lawrence Zitnick Update: We released the new GANformer2 paper! *I wish to thank Ch

Drew Arad Hudson 1.2k Jan 02, 2023
Official Chainer implementation of GP-GAN: Towards Realistic High-Resolution Image Blending (ACMMM 2019, oral)

GP-GAN: Towards Realistic High-Resolution Image Blending (ACMMM 2019, oral) [Project] [Paper] [Demo] [Related Work: A2RL (for Auto Image Cropping)] [C

Wu Huikai 402 Dec 27, 2022
PED: DETR for Crowd Pedestrian Detection

PED: DETR for Crowd Pedestrian Detection Code for PED: DETR For (Crowd) Pedestrian Detection Paper PED: DETR for Crowd Pedestrian Detection Installati

36 Sep 13, 2022
A MatConvNet-based implementation of the Fully-Convolutional Networks for image segmentation

MatConvNet implementation of the FCN models for semantic segmentation This package contains an implementation of the FCN models (training and evaluati

VLFeat.org 175 Feb 18, 2022
Implementation of "Learning to Match Features with Seeded Graph Matching Network" ICCV2021

SGMNet Implementation PyTorch implementation of SGMNet for ICCV'21 paper "Learning to Match Features with Seeded Graph Matching Network", by Hongkai C

87 Dec 11, 2022
[ICCV 2021] A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation

[ICCV 2021] A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation

CodingMan 45 Dec 12, 2022
Dark Finix: All in one hacking framework with almost 100 tools

Dark Finix - Hacking Framework. Dark Finix is a all in one hacking framework wit

Md. Nur habib 2 Feb 18, 2022
Code to reproduce results from the paper "AmbientGAN: Generative models from lossy measurements"

AmbientGAN: Generative models from lossy measurements This repository provides code to reproduce results from the paper AmbientGAN: Generative models

Ashish Bora 87 Oct 19, 2022
CarND-LaneLines-P1 - Lane Finding Project for Self-Driving Car ND

Finding Lane Lines on the Road Overview When we drive, we use our eyes to decide where to go. The lines on the road that show us where the lanes are a

Udacity 769 Dec 27, 2022
Code for our NeurIPS 2021 paper Mining the Benefits of Two-stage and One-stage HOI Detection

CDN Code for our NeurIPS 2021 paper "Mining the Benefits of Two-stage and One-stage HOI Detection". Contributed by Aixi Zhang*, Yue Liao*, Si Liu, Mia

71 Dec 14, 2022
An official source code for "Augmentation-Free Self-Supervised Learning on Graphs"

Augmentation-Free Self-Supervised Learning on Graphs An official source code for Augmentation-Free Self-Supervised Learning on Graphs paper, accepted

Namkyeong Lee 59 Dec 01, 2022
PySLM Python Library for Selective Laser Melting and Additive Manufacturing

PySLM Python Library for Selective Laser Melting and Additive Manufacturing PySLM is a Python library for supporting development of input files used i

Dr Luke Parry 35 Dec 27, 2022