PyTorch implementation of the paper: "DR.VIC: Decomposition and Reasoning for Video Individual Counting", CVPR, 2022

Overview

DRNet for Video Individual Counting (CVPR 2022)

Introduction

This is the official PyTorch implementation of the paper DR.VIC: Decomposition and Reasoning for Video Individual Counting. Unlike single-image counting methods, it counts the total number of pedestrians in a video sequence, with a person who appears in multiple frames counted only once. DRNet decomposes this new task into estimating the initial crowd number in the first frame and integrating differential crowd numbers over a set of subsequent image pairs (namely, the current frame and its preceding frame).

(framework figure)
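In pseudocode, the decomposition can be sketched as follows (illustrative only; `count_first_frame` and `estimate_inflow` are hypothetical stand-ins for DRNet's first-frame counting and its pair-wise inflow reasoning, not the actual API):

```python
# Illustrative sketch of the video individual counting decomposition:
# total count = crowd count in the first frame
#               + inflow (newly appearing pedestrians) accumulated over frame pairs.

def video_individual_count(frames, count_first_frame, estimate_inflow):
    """frames: list of sampled video frames.
    count_first_frame(frame): returns the crowd count of a single frame.
    estimate_inflow(prev, curr): returns how many pedestrians appear in `curr`
        but not in `prev` (both callables are hypothetical stand-ins).
    """
    total = count_first_frame(frames[0])          # initial crowd number
    for prev, curr in zip(frames[:-1], frames[1:]):
        total += estimate_inflow(prev, curr)      # differential crowd number per pair
    return total
```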

Catalog

  • Testing Code (2022.3.19)
  • PyTorch pretrained models (2022.3.19)
  • Training Code
    • HT21
    • SenseCrowd

Getting started

Preparation

  • Clone this repo into the directory (Root/DRNet).

  • Install dependencies. We use Python 3.7 and PyTorch >= 1.6.0: http://pytorch.org (a quick environment check is sketched after this list).

    conda create -n DRNet python=3.7
    conda activate DRNet
    conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
    cd ${DRNet}
    pip install -r requirements.txt
  • PreciseRoIPooling for extracting the feature descriptors

    Note: the PreciseRoIPooling [1] module is included in this repo, but it may cause problems when running the code:

    1. If you are prompted to install ninja, the following commands will help you.
      wget https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip
      sudo unzip ninja-linux.zip -d /usr/local/bin/
      sudo update-alternatives --install /usr/bin/ninja ninja /usr/local/bin/ninja 1 --force 
    2. If you encounter errors when compiling the PreciseRoIPooling, you can look up the original repo's issues for help.
  • Datasets

    • HT21 dataset: Download CroHD dataset from this link. Unzip HT21.zip and place HT21 into the folder (Root/dataset/).
    • SenseCrowd dataset: To be updated when it is released.
    • Download the train/val/test set lists from this link: dataset, and place them in the corresponding dataset folders.
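After installation, a quick sanity check of the environment can look like the following (a minimal sketch; the version numbers simply echo the conda command above):

```python
# Quick sanity check: verify PyTorch / torchvision versions and CUDA availability.
import torch
import torchvision

print("torch:", torch.__version__)              # expected >= 1.6.0 (e.g., 1.7.0)
print("torchvision:", torchvision.__version__)  # e.g., 0.8.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```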

Training

Check the following parameters in config.py before training (an illustrative excerpt is sketched below):

  • Use __C.DATASET = 'HT21' to set the dataset (default: HT21).
  • Use __C.GPU_ID = '0' to set the GPU.
  • Use __C.MAX_EPOCH = 20 to set the number of training epochs (default: 20).
  • Use __C.EXP_PATH = os.path.join('./exp', __C.DATASET) to set the directory for saving the code, weights, and resume point.

Check the other parameters (TRAIN_BATCH_SIZE, TRAIN_SIZE, etc.) in Root/DRNet/datasets/setting in case your GPU's memory cannot accommodate the default setting.
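For reference, the relevant part of config.py looks roughly like the excerpt below (a sketch only; the exact structure, including the use of EasyDict, is an assumption based on C^3 Framework-style configs):

```python
# Illustrative excerpt of config.py (structure assumed; values as listed above).
import os
from easydict import EasyDict as edict  # EasyDict-based config object is an assumption

__C = edict()

__C.DATASET = 'HT21'                               # dataset to train on
__C.GPU_ID = '0'                                   # GPU id(s) to use
__C.MAX_EPOCH = 20                                 # number of training epochs
__C.EXP_PATH = os.path.join('./exp', __C.DATASET)  # directory for code, weights, and resume points
```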

  • Run python train.py.

Tip: training takes ~10 hours on the HT21 dataset with one TITAN RTX (24 GB memory).

Testing

To reproduce the reported performance, download the pre-trained models and place the pretrained_models folder in Root/DRNet/model/.

  • For HT21:
    • Run python test_HT21.py.
  • For SenseCrowd:
    • Run python test_SENSE.py. The output file (*_SENSE_cnt.py) will then be generated.

Performance

Results on the HT21 and SenseCrowd datasets:

  • HT21 dataset

| Method | CroHD11 / CroHD12 / CroHD13 / CroHD14 / CroHD15 | MAE / MSE / MRAE (%) |
| --- | --- | --- |
| Paper: VGG+FPN [2,3] | 164.6 / 1075.5 / 752.8 / 784.5 / 382.3 | 141.1 / 192.3 / 27.4 |
| This repo's reproduction: VGG+FPN [2,3] | 138.4 / 1017.5 / 623.9 / 659.8 / 348.5 | 160.7 / 217.3 / 25.1 |
  • SenseCrowd dataset

| Method | MAE / MSE / MRAE (%) | MIAE / MOAE | D0~D4 (for MAE) |
| --- | --- | --- | --- |
| Paper: VGG+FPN [2,3] | 12.3 / 24.7 / 12.7 | 1.98 / 2.01 | 4.1 / 8.0 / 23.3 / 50.0 / 77.0 |
| This repo's reproduction: VGG+FPN [2,3] | 11.7 / 24.6 / 11.7 | 1.99 / 1.88 | 3.6 / 6.8 / 22.4 / 42.6 / 85.2 |
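For reference, the video-level error metrics can be computed roughly as below (a minimal sketch assuming the usual crowd-counting conventions, with MSE reported as a root mean squared error and MRAE as a percentage; MIAE/MOAE are the analogous absolute errors on per-interval inflow/outflow counts and are omitted here). See the paper for the exact definitions.

```python
# Sketch of the video-level counting metrics under assumed standard definitions.
import numpy as np

def counting_metrics(pred_counts, gt_counts):
    """pred_counts, gt_counts: per-video predicted / ground-truth pedestrian counts."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    err = pred - gt
    mae = np.abs(err).mean()                 # mean absolute error
    mse = np.sqrt((err ** 2).mean())         # root mean squared error (reported as MSE)
    mrae = (np.abs(err) / gt).mean() * 100   # mean relative absolute error, in %
    return mae, mse, mrae
```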

Video Demo

Please visit bilibili or YouTube to watch the video demonstration.

References

  1. Acquisition of Localization Confidence for Accurate Object Detection, ECCV, 2018.
  2. Very Deep Convolutional Networks for Large-scale Image Recognition, arXiv, 2014.
  3. Feature Pyramid Networks for Object Detection, CVPR, 2017.

Citation

If you find this project useful for your research, please cite:

@inproceedings{han2022drvic,
  title={DR.VIC: Decomposition and Reasoning for Video Individual Counting},
  author={Han, Tao and Bai, Lei and Gao, Junyu and Wang, Qi and Ouyang, Wanli},
  booktitle={CVPR},
  year={2022}
}

Acknowledgement

The released PyTorch training script borrows some code from the C^3 Framework and SuperGlue repositories. If you find this repo helpful for your research, please consider citing them as well.
