
GSDT

Joint Object Detection and Multi-Object Tracking with Graph Neural Networks

This is the official PyTorch implementation of our paper: "Joint Object Detection and Multi-Object Tracking with Graph Neural Networks". Our project website and video demos are here. If you find our work useful, we'd appreciate you citing our paper as follows:

@article{Wang2020_GSDT,
  author  = {Wang, Yongxin and Kitani, Kris and Weng, Xinshuo},
  journal = {arXiv:2006.13164},
  title   = {{Joint Object Detection and Multi-Object Tracking with Graph Neural Networks}},
  year    = {2020}
}

Introduction

Object detection and data association are critical components in multi-object tracking (MOT) systems. Although the two components depend on each other, prior work often designs the detection and data association modules separately and trains them with different objectives. As a result, we cannot back-propagate the gradients and optimize the entire MOT system, which leads to sub-optimal performance. To address this issue, recent work simultaneously optimizes detection and data association modules under a joint MOT framework, which has shown improved performance in both modules. In this work, we propose a new instance of the joint MOT approach based on Graph Neural Networks (GNNs). The key idea is that GNNs can model relations between variable-sized objects in both the spatial and temporal domains, which is essential for learning discriminative features for detection and data association. Through extensive experiments on the MOT15/16/17/20 datasets, we demonstrate the effectiveness of our GNN-based joint MOT approach and achieve state-of-the-art performance for both detection and MOT tasks.
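
For intuition, here is a minimal sketch of the core idea using PyTorch Geometric: detections from two consecutive frames become graph nodes, temporal edges connect them, and a GNN layer refines each detection's features through message passing. The feature dimension, edge construction, and the choice of AGNNConv (picked only because the released tracking scripts reference it) are illustrative assumptions, not the paper's exact architecture.

# Minimal sketch (not the paper's actual architecture): relate a variable number
# of detections across two frames with a PyTorch Geometric GNN layer.
# Node features are stand-in per-detection embeddings; edge construction,
# feature dimension, and the AGNNConv choice are illustrative assumptions.
import torch
from torch_geometric.nn import AGNNConv

feat_dim = 128
num_det_t, num_det_t1 = 5, 7                       # detections in frame t and frame t+1 (variable-sized)
x = torch.randn(num_det_t + num_det_t1, feat_dim)  # one node per detection, both frames stacked

# Fully connect frame-t nodes to frame-(t+1) nodes (temporal edges, both directions).
src = torch.arange(num_det_t).repeat_interleave(num_det_t1)
dst = torch.arange(num_det_t, num_det_t + num_det_t1).repeat(num_det_t)
edge_index = torch.cat([torch.stack([src, dst]), torch.stack([dst, src])], dim=1)

gnn = AGNNConv()                # attention-weighted message passing
x_refined = gnn(x, edge_index)  # features refined by cross-frame relations
print(x_refined.shape)          # torch.Size([12, 128])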

Usage

Dependencies

We recommend using Anaconda for managing dependencies and environments. You may follow the commands below to set up your environment.

conda create -n dev python=3.6
conda activate dev
pip install -r requirements.txt

We use the PyTorch Geometric package for the implementation of our Graph Neural Network based architecture.

bash install_pyg.sh   # we used CUDA_version=cu101 
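
As an optional sanity check (our own addition, not part of the repo), the snippet below verifies that the installed PyTorch/CUDA combination matches the wheels pulled by install_pyg.sh and that torch_geometric imports cleanly.

# Optional sanity check: the torch-geometric companion wheels must match
# your installed PyTorch/CUDA combination (install_pyg.sh was run with cu101).
import torch
import torch_geometric

print("torch:", torch.__version__, "| CUDA:", torch.version.cuda)
print("torch_geometric:", torch_geometric.__version__)
print("GPU available:", torch.cuda.is_available())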

Build Deformable Convolutional Networks V2 (DCNv2)

cd ./src/lib/models/networks/DCNv2
bash make.sh
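
Optionally, you can verify the build from inside the DCNv2 directory with a snippet like the one below. The dcn_v2 module and DCN class names follow the upstream DCNv2 repository and are an assumption about this copy of the code; a GPU is required.

# Optional check that the compiled extension is importable. Run from inside the
# DCNv2 directory; the module/class names follow the upstream DCNv2 repo
# (dcn_v2.DCN) and are assumed to apply here. Requires a CUDA-capable GPU.
import torch
from dcn_v2 import DCN

conv = DCN(64, 64, kernel_size=3, stride=1, padding=1).cuda()
out = conv(torch.randn(1, 64, 32, 32).cuda())
print(out.shape)  # expected: torch.Size([1, 64, 32, 32])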

To automatically render the output tracking results as videos, please install ffmpeg:

conda install ffmpeg=4.2.2

Data preparation

We follow the same dataset setup as in JDE. Please refer to their DATA ZOO for data download and preparation.

To prepare the 2DMOT15 and MOT20 data, download them directly from the MOT Challenge website and format each dataset directory as follows:

MOT15
   |——————images
   |        └——————train
   |        └——————test
   └——————labels_with_ids
            └——————train(empty)
MOT20
   |——————images
   |        └——————train
   |        └——————test
   └——————labels_with_ids
            └——————train(empty)
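
If you prefer to create this empty skeleton programmatically, a minimal helper is sketched below (our own convenience script, not part of the repo); the MOT Challenge sequences still need to be extracted into images/train and images/test yourself.

# Optional helper to create the empty directory skeleton shown above.
# labels_with_ids/train starts out empty and is later filled by gen_labels_*.py.
import os

def make_layout(root):
    for sub in ("images/train", "images/test", "labels_with_ids/train"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)

for dataset in ("MOT15", "MOT20"):
    make_layout(dataset)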

Then change the seq_root and label_root in src/gen_labels_15.py and src/gen_labels_20.py accordingly, and run:

cd src
python gen_labels_15.py
python gen_labels_20.py

This will generate the labels for 2DMOT15 and MOT20 in the desired format. The seqinfo.ini files are required for 2DMOT15 and can be found here: [Google], [Baidu] (code: 8o0w).
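
For reference, the generated label files follow the JDE/FairMOT convention (assumed from the JDE DATA ZOO this repo follows): each line stores a class id, a track identity, and a center/size box normalized by image width and height. A small parser sketch:

# One line per box: "class id x_center y_center width height", with coordinates
# normalized by image width/height (format assumed from JDE's DATA ZOO).
def parse_label_line(line, img_w, img_h):
    cls, tid, xc, yc, w, h = map(float, line.split())
    # convert normalized center/size back to a pixel-space (x1, y1, x2, y2) box
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return int(cls), int(tid), (x1, y1, x2, y2)

print(parse_label_line("0 1 0.5 0.5 0.1 0.2", img_w=1920, img_h=1080))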

Inference

Download and save the pretrained weights for each dataset by following the links below:

Dataset Model
2DMOT15 model_mot15.pth
MOT17 model_mot17.pth
MOT20 model_mot20.pth
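
If you want to inspect a downloaded checkpoint before running the tracking scripts, a quick peek such as the following can help; the dictionary layout is typical of FairMOT-style checkpoints, which this codebase builds on, but is an assumption rather than a guarantee.

# Optional: peek inside a downloaded checkpoint. The key layout is assumed to be
# FairMOT-style (e.g. 'state_dict', 'epoch'), not guaranteed for these files.
import torch

ckpt = torch.load("model_mot17.pth", map_location="cpu")
print(type(ckpt), list(ckpt.keys())[:5] if isinstance(ckpt, dict) else None)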

Run one of the following commands to reproduce our paper's tracking performance on the MOT Challenge.

cd ./experiments
bash track_gnn_mot_AGNNConv_RoIAlign_mot15.sh
bash track_gnn_mot_AGNNConv_RoIAlign_mot17.sh
bash track_gnn_mot_AGNNConv_RoIAlign_mot20.sh

To clarify, we currently submit the MOT17 results directly as our MOT16 results; that is, our MOT16 and MOT17 results and models are identical.

Training

We are currently cleaning up the training code and will release it as soon as we can. Stay tuned!

Performance on MOT Challenge

You can refer to the MOT Challenge website for the performance of our method. For your convenience, we summarize the results below:

Dataset MOTA IDF1 MT ML IDS
2DMOT15 60.7 64.6 47.0% 10.5% 477
MOT16 66.7 69.2 38.6% 19.0% 959
MOT17 66.2 68.7 40.8% 18.3% 3318
MOT20 67.1 67.5 53.1% 13.2% 3133

Acknowledgement

A large part of the code is borrowed from FairMOT. We appreciate their great work!
