Code for "The Box Size Confidence Bias Harms Your Object Detector"

Overview

The Box Size Confidence Bias Harms Your Object Detector - Code

Disclaimer: This repository is for research purposes only. It is designed to maintain reproducibility of the experiments described in "The Box Size Confidence Bias Harms Your Object Detector".

Setup

Download Annotations

Download the COCO2017 annotations for train, val, and test-dev from here and move them into the folder structure shown below (alternatively, change the config in config/all/paths/annotations/coco_2017.yaml to match your local folder structure):

 .
 └── data
   └── coco
      └── annotations
        ├── instances_train2017.json
        ├── instances_val2017.json
        └── image_info_test-dev2017.json
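
For reference, the annotation files can be fetched and unpacked from the official COCO download site like this (assuming wget and unzip are available; the URLs are the standard COCO2017 annotation archives):

mkdir -p data/coco
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
# contains annotations/instances_train2017.json and annotations/instances_val2017.json
unzip annotations_trainval2017.zip -d data/coco
# contains annotations/image_info_test-dev2017.json (among other files)
unzip image_info_test2017.zip -d data/coco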

Generate Detections

Generate detections on the COCO2017 train, val, and test-dev splits and save them as JSON files in the COCO detection format. Move the detections to data/detections/MODEL_NAME; see config/all/detections/default_all.yaml for all detectors used and for how to add other detectors.
The examples below show how to generate the detections with the official implementations of two of the used detectors.

Examples

CenterNet (Hourglass)

To generate the detections for CenterNet with the Hourglass backbone, first follow the installation instructions. Then download ctdet_coco_hg.pth from the official source to /models. Then generate the detections from the /src folder:

# On val
python3 test.py ctdet --arch hourglass --exp_id Centernet_HG_val --dataset coco --load_model ../models/ctdet_coco_hg.pth 
# On test-dev
python3 test.py ctdet --arch hourglass --exp_id Centernet_HG_test-dev --dataset coco --load_model ../models/ctdet_coco_hg.pth --trainval
# On train
sed '56s/.*/  split = "train"/' test.py > test_train.py
python3 test_train.py ctdet --arch hourglass --exp_id Centernet_HG_train --dataset coco --load_model ../models/ctdet_coco_hg.pth

The scaling for TTA is set via the "--test_scales LIST_SCALES" flag, so to generate only the 0.5x-scale detections, add --test_scales 0.5.
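
For example, a val-split run restricted to the 0.5x scale might look like this (the exp_id is just an illustrative name):

python3 test.py ctdet --arch hourglass --exp_id Centernet_HG_TTA_050_val --dataset coco --load_model ../models/ctdet_coco_hg.pth --test_scales 0.5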

RetinaNet with MMDetection

To generate the detection files using mmdet, first follow the installation instructions. Then download the specific model weights, in this example retinanet_x101_64x4d_fpn_2x_coco_20200131-bca068ab.pth, to PATH_TO_DOWNLOADED_WEIGHTS and execute the following commands:

# On train
python3 tools/test.py configs/retinanet/retinanet_x101_64x4d_fpn_2x_coco.py PATH_TO_DOWNLOADED_WEIGHTS/retinanet_x101_64x4d_fpn_2x_coco_20200131-bca068ab.pth  --eval bbox --eval-options jsonfile_prefix='PATH_TO_THIS_REPO/detections/retinanet_x101_64x4d_fpn_2x/train2017' --cfg-options data.test.img_prefix='PATH_TO_COCO_IMGS/train2017' data.test.ann_file='PATH_TO_COCO_ANNS/annotations/instances_train2017.json'
# On val
python3 tools/test.py configs/retinanet/retinanet_x101_64x4d_fpn_2x_coco.py PATH_TO_DOWNLOADED_WEIGHTS/retinanet_x101_64x4d_fpn_2x_coco_20200131-bca068ab.pth  --eval bbox --eval-options jsonfile_prefix='PATH_TO_THIS_REPO/detections/retinanet_x101_64x4d_fpn_2x/val2017' --cfg-options data.test.img_prefix='PATH_TO_COCO_IMGS/val2017' data.test.ann_file='PATH_TO_COCO_ANNS/annotations/instances_val2017.json'
# On test-dev
python3 tools/test.py configs/retinanet/retinanet_x101_64x4d_fpn_2x_coco.py PATH_TO_DOWNLOADED_WEIGHTS/retinanet_x101_64x4d_fpn_2x_coco_20200131-bca068ab.pth  --eval bbox --eval-options jsonfile_prefix='PATH_TO_THIS_REPO/detections/retinanet_x101_64x4d_fpn_2x/test-dev2017' --cfg-options data.test.img_prefix='PATH_TO_COCO_IMGS/test2017' data.test.ann_file='PATH_TO_COCO_ANNS/annotations/image_info_test-dev2017.json'
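
MMDetection stores each result file as PREFIX.bbox.json. One way to move the val-split detections into the folder layout described above (the target folder name is an assumption; check config/all/detections/default_all.yaml for the expected locations and filenames):

mkdir -p data/detections/retinanet_x101_64x4d_fpn_2x
mv PATH_TO_THIS_REPO/detections/retinanet_x101_64x4d_fpn_2x/val2017.bbox.json data/detections/retinanet_x101_64x4d_fpn_2x/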

Install Dependencies

pip3 install -r requirements.txt

Optional Dependencies

# Faster COCO evaluation (used if available)
pip3 install fast_coco_eval
# Parallel multi-runs, if enough RAM is available (add "hydra/launcher=joblib" to every command with the -m flag)
pip3 install hydra-joblib-launcher
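
For example, the Table 2 multirun below, run with the joblib launcher (the override can be appended to any command that uses the -m flag):

python3 calibrate.py -cn ablate_metrics "seed=range(4,14)" hydra/launcher=joblib -m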

Experiments

Most of the experiments are performed on the CenterNet (HG) detections. To change the detector, add detections=OTHER_DETECTOR, with the location of OTHER_DETECTOR's detections specified in config/all/detections/default_all.yaml. The results of each experiment are saved to outputs/EXPERIMENT/DATE, or to multirun/EXPERIMENT/DATE in the case of a multirun (-m flag).
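
For example, to run the main calibration config on another detector's detections (OTHER_DETECTOR is the placeholder above; use the name under which the detector is registered in default_all.yaml):

python3 calibrate.py -cn calibrate_train detections=OTHER_DETECTOR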

Figure 2: Calibration curve of histogram binning and modified version

# original histogram binning calibration curve
python3 create_plots.py -cn plot_org_hist_bin
# modified histogram binning calibration curve:
python3 create_plots.py -cn plot_mod_hist_bin

Table 1: Ablation of histogram binning modifications

python3 calibrate.py -cn ablate_modified_hist 

Table 2: Ablation of optimization metrics of calibration on validation split

python3 calibrate.py -cn ablate_metrics  "seed=range(4,14)" -m

Figure 3: Bounding box size bias on train and val data detections

Plot of calibration curve:

# on validation data
python3 create_plots.py -cn plot_miscal name="plot_miscal_val" split="val"
# on train data:
python3 create_plots.py -cn plot_miscal name="plot_miscal_train" split="train" calib.conf_bins=20

Table 3: Ablation of optimization metrics of calibration on training data

python3 calibrate.py -cn explore_train

Table 4: Effect of individual calibration on TTA

  1. Generate detections (on the train and val splits) for each scale factor individually (CenterNet_HG_TTA_050, CenterNet_HG_TTA_075, CenterNet_HG_TTA_100, CenterNet_HG_TTA_125, CenterNet_HG_TTA_150) and for the complete TTA (CenterNet_HG_TTA_ens).

  2. Generate individually calibrated detections:

    python3 calibrate.py -cn calibrate_train name="calibrate_train_tta" detector="CenterNet_HG_TTA_050","CenterNet_HG_TTA_075","CenterNet_HG_TTA_100","CenterNet_HG_TTA_125","CenterNet_HG_TTA_150","CenterNet_HG_TTA_ens" -m
  3. Copy the calibrated detections from multirun/calibrate_train_tta/DATE/MODEL_NAME/quantile_spline_ontrain_opt_tradeoff_full/val/MODEL_NAME.json to data/calibrated/MODEL_NAME/val/results.json for MODEL_NAME in (CenterNet_HG_TTA_050, CenterNet_HG_TTA_075, CenterNet_HG_TTA_100, CenterNet_HG_TTA_125, CenterNet_HG_TTA_150), as sketched in the snippet after this list.

  4. Generate the TTA of the calibrated detections:

    python3 enseble.py -cn enseble
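
A sketch of step 3 as a shell loop (DATE is the timestamp folder created by the multirun):

for M in CenterNet_HG_TTA_050 CenterNet_HG_TTA_075 CenterNet_HG_TTA_100 CenterNet_HG_TTA_125 CenterNet_HG_TTA_150; do
  mkdir -p data/calibrated/$M/val
  cp multirun/calibrate_train_tta/DATE/$M/quantile_spline_ontrain_opt_tradeoff_full/val/$M.json data/calibrated/$M/val/results.json
done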

Figure 4: Ablation of IoU threshold

python3 calibrate.py -cn calibrate_train name="ablate_iou" "iou_threshold=range(0.5,0.96,0.05)" -m

Table 5: Calibration method on different models

python3 calibrate.py -cn calibrate_train name="calibrate_all_models" detector=LIST_ALL_MODELS -m

The test-dev predictions can be found in multirun/calibrate_all_models/DATE/MODEL_NAME/quantile_spline_ontrain_opt_tradeoff_full/test/MODEL_NAME.json and can be evaluated using the official evaluation server.
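
The COCO evaluation server expects a zip archive containing a file named detections_test-dev2017_<ALG>_results.json (naming per the COCO upload guidelines); for example:

cp multirun/calibrate_all_models/DATE/MODEL_NAME/quantile_spline_ontrain_opt_tradeoff_full/test/MODEL_NAME.json detections_test-dev2017_MODEL_NAME_results.json
zip MODEL_NAME_test-dev2017.zip detections_test-dev2017_MODEL_NAME_results.json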

Supplementary Material

A. Figures 5 & 6: Performance Change for Extended Optimization Metrics

python3 calibrate.py -cn ablate_metrics_extended  "seed=range(4,14)" -m

A. Table 6: Influence of parameter search spaces on performance gain

# Results for B0, C0
python3 calibrate.py -cn calibrate_train
# Results for B0, C1
python3 calibrate.py -cn calibrate_train_larger_cbins
# Results for B0 union B1, C0
python3 calibrate.py -cn calibrate_train_larger_bbins
# Results for B0 union B1, C0 union C1
python3 calibrate.py -cn calibrate_train_larger_cbbins

A. Table 7: Influence of calibration method on different sized versions of EfficientDet

python3 calibrate.py -cn calibrate_train name="influence_modelsize" detector="Efficientdet_D0","Efficientdet_D1","Efficientdet_D2","Efficientdet_D3","Efficientdet_D4","Efficientdet_D5","Efficientdet_D6","Efficientdet_D7" -m