Detecting Invisible People

[ICCV 2021 Paper] [Website]

Tarasha Khurana, Achal Dave, Deva Ramanan

Introduction

This repository contains code for Detecting Invisible People. We extend the original DeepSORT algorithm to localize people even while they are completely occluded in a video. See the arXiv preprint for more information.

Dependencies

Create a conda environment with the given environment.yml file.

conda env create -f environment.yml

Preprocessing

The code expects your dataset to follow the MOT Challenge data format, with a directory structure approximately like the following:

MOT17/
-- train/
---- seq_01/
------ img1/
------ img1Depth/
------ gt/
------ det/
...
-- test/
---- seq_02/
------ img1/
------ img1Depth/
------ det/

The img1Depth folder stores normalized disparity maps in .npy format (see the Note below). In the paper, the method is run on depth produced by the MegaDepth depth estimator.
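
As a rough illustration, the sketch below shows one way to convert per-frame disparity maps into the expected .npy files. The min-max normalization and the frame-naming convention (one .npy per frame, named like the images in img1/) are assumptions, so adjust them to match your depth estimator and dataset.

# Sketch: save per-frame normalized disparity maps into img1Depth/.
# Assumes one disparity array per frame, named like the images in img1/
# (e.g. 000001.jpg -> 000001.npy); normalization to [0, 1] is an assumption.
import os
import glob
import numpy as np

def save_normalized_disparity(disparity_dir, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(disparity_dir, "*.npy"))):
        disparity = np.load(path).astype(np.float32)
        # Min-max normalize to [0, 1]; replace with whatever your estimator expects.
        d_min, d_max = disparity.min(), disparity.max()
        normalized = (disparity - d_min) / max(d_max - d_min, 1e-8)
        np.save(os.path.join(out_dir, os.path.basename(path)), normalized)

save_normalized_disparity("raw_disparity/seq_01", "MOT17/train/seq_01/img1Depth")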

Given the above folder structure, generate the appearance features for your detections as described in the DeepSORT repository.
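
As a sanity check, DeepSORT-style detection files are commonly stored as one .npy array per sequence, where each row concatenates the MOT detection fields with the appearance embedding. The exact column layout below is an assumption, so verify it against the DeepSORT repository.

# Sketch: inspect a DeepSORT-style detections-with-features file.
# Assumption: each row is [frame, id, x, y, w, h, conf, X, Y, Z, feature...],
# i.e. 10 MOT detection columns followed by the appearance embedding.
import numpy as np

detections = np.load("resources/detections/seq_01.npy")  # hypothetical path
num_rows, num_cols = detections.shape
feature_dim = num_cols - 10
print(f"{num_rows} detections, {feature_dim}-dimensional appearance features")

frame_indices = detections[:, 0].astype(int)
boxes = detections[:, 2:6]      # (x, y, w, h) in image coordinates
features = detections[:, 10:]   # appearance embeddings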

Running the method

The script run_forecast_filtering.sh runs the method with the hyperparameters used in the paper and produces output .txt files in the MOT Challenge submission format. The script also has support for computing the metrics, but this has not been verified. Run it as follows:

bash run_forecast_filtering.sh experimentName

Note that, in order to speed up the code release, the dataset, preprocessed detections, and output file paths are hardcoded in the files and will have to be changed manually.
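
For reference, each line of a MOT Challenge submission file holds one box as comma-separated values. Below is a minimal reader, assuming the standard frame, id, bb_left, bb_top, bb_width, bb_height, conf layout (with the trailing world-coordinate fields typically set to -1); the output path is a placeholder.

# Sketch: parse an output .txt file in the MOT Challenge submission format.
# Each line: frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z
import csv
from collections import defaultdict

def load_mot_results(path):
    tracks_by_frame = defaultdict(list)
    with open(path) as f:
        for row in csv.reader(f):
            frame, track_id = int(row[0]), int(row[1])
            left, top, width, height, conf = map(float, row[2:7])
            tracks_by_frame[frame].append((track_id, left, top, width, height, conf))
    return tracks_by_frame

results = load_mot_results("output/seq_01.txt")  # hypothetical output path
print(f"Loaded {sum(len(v) for v in results.values())} boxes over {len(results)} frames")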

Citing Detecting Invisible People

If you find this code useful in your research, please consider citing the following paper:

@inproceedings{khurana2021detecting,
  title={{Detecting Invisible People}},
  author={Khurana, Tarasha and Dave, Achal and Ramanan, Deva},
  booktitle={{IEEE/CVF International Conference on Computer Vision (ICCV)}},
  year={2021}
}

Warning

This is starter code that has not been cleaned for release. It currently has verified support only for running the method described in Detecting Invisible People, with the output tracks written in the MOT Challenge submission format. Although code for the Top-k metric is provided, this codebase does not guarantee support for that metric yet.

The hope is that you are able to benchmark this method for your CVPR 2022 submission and compute your own metrics on the method's output. If the method code does not work, please open an issue.

Note

Although it is easy to run any monocular depth estimator and store its output (usually given as disparity) in .npy files, I have added a script in tools/demo_images.py which can save the .npy files for you. Note that this script should be run after setting up the MegaDepth codebase and copying the file to its root directory. I will likely also release my own depth maps for the MOT17 dataset over the Halloween weekend.

If you want to try running the metrics, my ground-truth JSON (in the format expected by pycocotools) is provided.
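
As an illustration only (this is not the paper's Top-k metric), the provided ground-truth JSON can be loaded and evaluated with the standard pycocotools COCO detection pipeline; the file names below are placeholders.

# Sketch: standard COCO-style evaluation against the provided ground-truth JSON.
# File names are placeholders; this is not the paper's Top-k metric.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("groundtruth.json")                 # ground truth provided with the repo
coco_dt = coco_gt.loadRes("my_results_coco.json")  # your detections, converted to COCO format
evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()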
