
Self-supervised Video Representation Learning Using Inter-intra Contrastive Framework

Official code for the paper Self-supervised Video Representation Learning Using Inter-intra Contrastive Framework [ACMMM'20].

arXiv paper | Project page

Requirements

This is my experimental environment.

  • PyTorch 1.3.0

It seems that PyTorch 1.7.0 is not compatible with the current code and leads to poor performance (see issue #9).

  • python 3.7.4
  • accimage

Inter-intra contrastive (IIC) framework

For samples, we have

  • Inter-positives: samples with same labels, not used for self-supervised learning;
  • Inter-negatives: different samples, or samples with different indexes;
  • Intra-positives: data from the same sample, in different views / from different augmentations;
  • Intra-negatives: data from the same sample in which some kind of information has been broken down; in the video case, the temporal information has been destroyed.

Our work makes use of all usable parts in this categorization to form an inter-intra contrastive framework. The experiments here are mainly based on Contrastive Multiview Coding (CMC).

This framework can be flexibly extended to other contrastive learning methods that use negative samples, such as MoCo and SimCLR.

[Figure: overview of the inter-intra contrastive framework]

Highlights

Make the most of data for contrastive learning.

Except for inter-positive samples, which require labels and are therefore not used in self-supervised learning, all possible data are used to help train the network. This inter-intra learning framework makes the most of the data available for contrastive learning.
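
As a rough illustration of how intra-negatives enter the objective, below is a minimal sketch of an InfoNCE-style loss in which each sample's intra-negative is appended to the usual in-batch (inter-) negatives. This is not the repo's actual NCE code (which follows the CMC implementation); the function name, tensor shapes, and temperature value are assumptions.

import torch
import torch.nn.functional as F

def inter_intra_nce(anchor, positive, intra_negative, temperature=0.07):
    # anchor:         (N, D) features of View #1 clips
    # positive:       (N, D) features of View #2 clips (intra-positives)
    # intra_negative: (N, D) features of temporally broken clips (intra-negatives)
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    intra_negative = F.normalize(intra_negative, dim=1)

    # Diagonal entries are the positive pairs; off-diagonal entries act as inter-negatives.
    logits_inter = anchor @ positive.t() / temperature        # (N, N)
    # Similarities to the intra-negatives extend the negative set.
    logits_intra = anchor @ intra_negative.t() / temperature  # (N, N)

    logits = torch.cat([logits_inter, logits_intra], dim=1)   # (N, 2N)
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)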

Flexibility of the framework

The inter-intra learning framework can be extended to

  • Different contrastive learning methods: CMC, MoCo, SimCLR ...
  • Different intra-negative generation methods: frame repeating, frame shuffling ...
  • Different backbones: C3D, R3D, R(2+1)D, I3D ...

Updates

Oct. 1, 2020 - Results using C3D and R(2+1)D are added; the random seed is now fixed more strictly.
Aug. 26, 2020 - Pretrained weights for R3D are added.

Usage of this repo

Note: we have added code to fix the random seed more strictly for better reproducibility. However, the results in our paper were obtained with the previous random-seed settings, so performance may differ slightly from the numbers reported in the paper. To reproduce the retrieval results from our paper exactly, please use the provided model weights.

Data preparation

You can download the UCF101 and HMDB51 datasets from their official websites: UCF101 and HMDB51. Then decode the videos into frames.
I highly recommend using the pre-computed optical flow images and resized RGB frames provided in this repo.

If you use the pre-computed frames, the folder structure is like path/to/dataset/video_id/frames.jpg. If you decode the frames on your own, the folder structure may instead be path/to/dataset/class_name/video_id/frames.jpg, in which case you need to pay more attention to the corresponding paths during dataset preparation.

For pre-computed frames, find rgb_folder, u_folder, and v_folder in datasets/ucf101.py (for the UCF101 dataset) and change them to match your environment. Please note that all of these modalities are expected to be prepared, even though in some settings the optical flow data are not used to train the model.

If you do not prepare optical flow data, simply setting u_folder=rgb_folder and v_folder=rgb_folder should avoid errors.
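
For reference, these variables are plain path strings; a hedged example of what the edit in datasets/ucf101.py might look like is shown below (all paths are placeholders for your own environment).

# datasets/ucf101.py -- adjust to your environment (paths below are placeholders)
rgb_folder = '/path/to/ucf101/rgb_frames/'   # resized RGB frames
u_folder = '/path/to/ucf101/flow_u/'         # horizontal optical-flow images
v_folder = '/path/to/ucf101/flow_v/'         # vertical optical-flow images

# Without optical flow, point both flow folders at the RGB frames to avoid errors:
# u_folder = rgb_folder
# v_folder = rgb_folder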

Train self-supervised learning part

python train_ssl.py --dataset=ucf101

This is equivalent to

python train_ssl.py --dataset=ucf101 --model=r3d --modality=res --neg=repeat

This default setting uses frame repeating to generate intra-negative samples for videos, with R3D as the backbone.
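
To make the two intra-negative options concrete, here is a minimal sketch of frame repeating and frame shuffling applied to a clip tensor of shape (C, T, H, W); the actual generation code in this repo may differ in details.

import torch

def repeat_frames(clip):
    # Intra-negative by frame repeating: one randomly chosen frame is repeated over
    # the whole clip, keeping appearance but destroying temporal information.
    c, t, h, w = clip.shape
    idx = torch.randint(0, t, (1,)).item()
    return clip[:, idx:idx + 1].repeat(1, t, 1, 1)

def shuffle_frames(clip):
    # Intra-negative by frame shuffling: the temporal order is randomly permuted.
    t = clip.shape[1]
    return clip[:, torch.randperm(t)]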

We use two views in our experiments. View #1 is an RGB video clip; View #2 can be an RGB, residual (res), or optical flow video clip. Residual video clips are the default modality for View #2. You can use --modality to try other modalities. Intra-negative samples are generated from View #1.

It may seem odd to use only one optical flow channel (u or v). We use only one channel so that a single model can handle inputs from different modalities. Using a separate model for each modality is also an option.

Retrieve video clips

python retrieve_clips.py --ckpt=/path/to/your/model --dataset=ucf101 --merge=True

One model is used to handle the different views/modalities. You can set --modality to decide which modality to use. When --merge=True, the RGB features for View #1 and the features of the specified modality for View #2 are combined for joint retrieval.

With the default training setting, it is easy to get over 30% top-1 accuracy for video retrieval on UCF101 and around 13% top-1 on HMDB51 without joint retrieval.
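
For reference, clip retrieval amounts to a nearest-neighbour search from test-set features to training-set features. A simplified sketch of the top-k evaluation is shown below (feature extraction is omitted; function and variable names are assumptions, not the repo's exact API).

import torch
import torch.nn.functional as F

def topk_retrieval_accuracy(test_feat, test_labels, train_feat, train_labels,
                            ks=(1, 5, 10, 20, 50)):
    # A test clip counts as correct at top-k if any of its k nearest training clips
    # (by cosine similarity) shares its class label.
    test_feat = F.normalize(test_feat, dim=1)
    train_feat = F.normalize(train_feat, dim=1)
    sim = test_feat @ train_feat.t()                 # (num_test, num_train)
    _, nn_idx = sim.topk(max(ks), dim=1)             # nearest training-clip indices
    nn_labels = train_labels[nn_idx]                 # (num_test, max_k)
    hits = nn_labels == test_labels.unsqueeze(1)
    return {k: hits[:, :k].any(dim=1).float().mean().item() for k in ks}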

Fine-tune model for video recognition

python ft_classify.py --ckpt=/path/to/your/model --dataset=ucf101

Testing will be automatically conducted at the end of training.

Or you can use

python ft_classify.py --ckpt=/path/to/your/model --dataset=ucf101 --mode=test

In this case, only testing is conducted with the given model.

Note: The accuracies using residual clips are not stable on the validation set (this may also be caused by the limited number of validation samples); the final testing step uses the best model on the validation set.

If everything is fine, you should achieve around 70% accuracy on UCF101. Results will vary with different random seeds.
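
Conceptually, fine-tuning loads the self-supervised backbone weights and attaches a freshly initialised classification head before supervised training; ft_classify.py handles this for you. A hedged sketch is shown below (the checkpoint layout and the name of the final layer are assumptions).

import torch
import torch.nn as nn

def load_ssl_backbone(model, ckpt_path, num_classes=101):
    # Load self-supervised weights into the backbone, ignoring keys that do not match.
    state = torch.load(ckpt_path, map_location='cpu')
    state = state.get('model', state)  # unwrap if weights are nested under 'model' (assumption)
    missing, unexpected = model.load_state_dict(state, strict=False)
    print('missing keys:', missing, 'unexpected keys:', unexpected)
    # Replace the final layer with a new classification head (layer name is an assumption).
    model.linear = nn.Linear(model.linear.in_features, num_classes)
    return model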

Results

Retrieval results

The table lists retrieval results on UCF101 split 1. We reimplemented CMC and report its results; other results are taken from the corresponding papers. VCOP, VCP, CMC, PRP, and ours are based on the R3D network backbone.

| Method | top1 | top5 | top10 | top20 | top50 |
| --- | --- | --- | --- | --- | --- |
| Jigsaw | 19.7 | 28.5 | 33.5 | 40.0 | 49.4 |
| OPN | 19.9 | 28.7 | 34.0 | 40.6 | 51.6 |
| R3D (random) | 9.9 | 18.9 | 26.0 | 35.5 | 51.9 |
| VCOP | 14.1 | 30.3 | 40.4 | 51.1 | 66.5 |
| VCP | 18.6 | 33.6 | 42.5 | 53.5 | 68.1 |
| CMC | 26.4 | 37.7 | 45.1 | 53.2 | 66.3 |
| Ours (repeat + res) | 36.5 | 54.1 | 62.9 | 72.4 | 83.4 |
| Ours (repeat + u) | 41.8 | 60.4 | 69.5 | 78.4 | 87.7 |
| Ours (shuffle + res) | 34.6 | 53.0 | 62.3 | 71.7 | 82.4 |
| Ours (shuffle + v) | 42.4 | 60.9 | 69.2 | 77.1 | 86.5 |
| PRP | 22.8 | 38.5 | 46.7 | 55.2 | 69.1 |
| RTT | 26.1 | 48.5 | 59.1 | 69.6 | 82.8 |
| MemDPC-RGB | 20.2 | 40.4 | 52.4 | 64.7 | - |
| MemDPC-Flow | 40.2 | 63.2 | 71.9 | 78.6 | - |

Recognition results

We use only R3D as our network backbone. In this table, all reported models are pre-trained on UCF101.

Usually, (1) using ResNet-18-3D, R(2+1)D, or deeper networks; (2) pre-training on larger datasets; (3) using larger input resolutions; and (4) combining with audio or other features will also help.

| Method | UCF101 | HMDB51 |
| --- | --- | --- |
| Jigsaw | 51.5 | 22.5 |
| O3N (res) | 60.3 | 32.5 |
| OPN | 56.3 | 22.1 |
| OPN (res) | 71.8 | 36.7 |
| CrossLearn | 58.7 | 27.2 |
| CMC (3 views) | 59.1 | 26.7 |
| R3D (random) | 54.5 | 23.4 |
| ImageNet-inflated | 60.3 | 30.7 |
| 3D ST-puzzle | 65.8 | 33.7 |
| VCOP (R3D) | 64.9 | 29.5 |
| VCOP (R(2+1)D) | 72.4 | 30.9 |
| VCP (R3D) | 66.0 | 31.5 |
| Ours (repeat + res, R3D) | 72.8 | 35.3 |
| Ours (repeat + u, R3D) | 72.7 | 36.8 |
| Ours (shuffle + res, R3D) | 74.4 | 38.3 |
| Ours (shuffle + v, R3D) | 67.0 | 34.0 |
| PRP (R3D) | 66.5 | 29.7 |
| PRP (R(2+1)D) | 72.1 | 35.0 |

Residual clips + 3D CNN

Residual clips with 3D CNNs are effective, especially for training from scratch. More information about this part can be found in Rethinking Motion Representation: Residual Frames with 3D ConvNets for Better Action Recognition (earlier but more detailed version) and Motion Representation Using Residual Frames with 3D CNN (short version with better results).

The key code for this part is

shift_x = torch.roll(x, 1, 2)   # shift the clip by one frame along the temporal dimension (dim=2)
x = ((shift_x - x) + 1) / 2     # residual frames, rescaled to [0, 1] (assuming frame values in [0, 1])

which is slightly different from the formulation in the papers.

We also reimplement VCP in this repo. By simply using residual clips, significant improvements can be obtained for both video retrieval and video recognition.

Pretrained weights

Pretrained weights from the self-supervised training step: R3D (Google Drive).

With this model, for video retrieval, you should achieve

  • 33.4% @top1 with --modality=res --merge=False
  • 34.8% @top1 with --modality=rgb --merge=False
  • 36.5% @top1 with --modality=res --merge=True

Fine-tuned weights for action recognition: R3D (Google Drive).

With this model, for video recognition, you should achieve 72.7% @top1 with python ft_classify.py --model=r3d --modality=res --mode=test --ckpt=./path/to/model --dataset=ucf101 --split=1. This result is better than the one reported in the paper. Results may be further improved with stronger data augmentation.

For any questions, please contact Li TAO ([email protected]).

Results for other network architectures

Results are averaged over 3 splits without using optical flow. The R3D and R21D backbones are the same as those used in VCOP / VCP / PRP.

| UCF101 | top1 | top5 | top10 | top20 | top50 | Recognition |
| --- | --- | --- | --- | --- | --- | --- |
| C3D (VCOP) | 12.5 | 29.0 | 39.0 | 50.6 | 66.9 | 65.6 |
| C3D (VCP) | 17.3 | 31.5 | 42.0 | 52.6 | 67.7 | 68.5 |
| C3D (PRP) | 23.2 | 38.1 | 46.0 | 55.7 | 68.4 | 69.1 |
| C3D (ours, repeat) | 31.9 | 48.2 | 57.3 | 67.1 | 79.1 | 70.0 |
| C3D (ours, shuffle) | 28.9 | 45.4 | 55.5 | 66.2 | 78.8 | 69.7 |
| R21D (VCOP) | 10.7 | 25.9 | 35.4 | 47.3 | 63.9 | 72.4 |
| R21D (VCP) | 19.9 | 33.7 | 42.0 | 50.5 | 64.4 | 66.3 |
| R21D (PRP) | 20.3 | 34.0 | 41.9 | 51.7 | 64.2 | 72.1 |
| R21D (ours, repeat) | 34.7 | 51.7 | 60.9 | 69.4 | 81.9 | 72.4 |
| R21D (ours, shuffle) | 30.2 | 45.6 | 55.0 | 64.4 | 77.6 | 73.3 |
| Res18-3D (ours, repeat) | 36.8 | 54.1 | 63.1 | 72.0 | 83.3 | 72.4 |
| Res18-3D (ours, shuffle) | 33.0 | 49.2 | 59.1 | 69.1 | 80.6 | 73.1 |

| HMDB51 | top1 | top5 | top10 | top20 | top50 | Recognition |
| --- | --- | --- | --- | --- | --- | --- |
| C3D (VCOP) | 7.4 | 22.6 | 34.4 | 48.5 | 70.1 | 28.4 |
| C3D (VCP) | 7.8 | 23.8 | 35.3 | 49.3 | 71.6 | 32.5 |
| C3D (PRP) | 10.5 | 27.2 | 40.4 | 56.2 | 75.9 | 34.5 |
| C3D (ours, repeat) | 9.9 | 29.6 | 42.0 | 57.3 | 78.4 | 30.8 |
| C3D (ours, shuffle) | 11.5 | 31.3 | 43.9 | 60.1 | 80.3 | 29.7 |
| R21D (VCOP) | 5.7 | 19.5 | 30.7 | 45.6 | 67.0 | 30.9 |
| R21D (VCP) | 6.7 | 21.3 | 32.7 | 49.2 | 73.3 | 32.2 |
| R21D (PRP) | 8.2 | 25.3 | 36.2 | 51.0 | 73.0 | 35.0 |
| R21D (ours, repeat) | 12.7 | 33.3 | 45.8 | 61.6 | 81.3 | 34.0 |
| R21D (ours, shuffle) | 12.6 | 31.9 | 44.2 | 59.9 | 80.7 | 31.2 |
| Res18-3D (ours, repeat) | 15.5 | 34.4 | 48.9 | 63.8 | 83.8 | 34.3 |
| Res18-3D (ours, shuffle) | 12.4 | 33.6 | 46.9 | 63.2 | 83.5 | 34.3 |

Citation

If you find our work helpful for your research, please consider citing the paper

@article{tao2020selfsupervised,
    title={Self-supervised Video Representation Learning Using Inter-intra Contrastive Framework},
    author={Li Tao and Xueting Wang and Toshihiko Yamasaki},
    journal={arXiv preprint arXiv:2008.02531},
    year={2020},
    eprint={2008.02531},
}

If you find the residual input helpful for video-related tasks, please consider citing the paper

@article{tao2020rethinking,
  title={Rethinking Motion Representation: Residual Frames with 3D ConvNets for Better Action Recognition},
  author={Tao, Li and Wang, Xueting and Yamasaki, Toshihiko},
  journal={arXiv preprint arXiv:2001.05661},
  year={2020}
}

@article{tao2020motion,
  title={Motion Representation Using Residual Frames with 3D CNN},
  author={Tao, Li and Wang, Xueting and Yamasaki, Toshihiko},
  journal={arXiv preprint arXiv:2006.13017},
  year={2020}
}

Acknowledgements

Part of this code is inspired by CMC and VCOP.
