SWA Object Detection

Overview

This project hosts the scripts for training SWA object detectors, as presented in our paper:

@article{zhang2020swa,
  title={SWA Object Detection},
  author={Zhang, Haoyang and Wang, Ying and Dayoub, Feras and S{\"u}nderhauf, Niko},
  journal={arXiv preprint arXiv:2012.12645},
  year={2020}
}

The full paper is available at: https://arxiv.org/abs/2012.12645.

Introduction

Do you want to gain ~1.0 AP for your object detector without any inference cost and without any change to the detector itself? Let us share a recipe with you. It is surprisingly simple: train your detector for an extra 12 epochs using cyclical learning rates, and then average these 12 checkpoints as your final detection model. This potent recipe is inspired by Stochastic Weight Averaging (SWA), proposed in [1] for improving generalization in deep neural networks, and we found it to be very effective in object detection as well. In this work, we systematically investigate the effects of applying SWA to object detection and instance segmentation. Through extensive experiments, we discover a good policy for performing SWA in object detection, and we consistently achieve a ~1.0 AP improvement over various popular detectors on the challenging COCO benchmark. We hope this work will introduce more researchers in object detection to this technique and help them train better object detectors.

SWA Object Detection: averaging multiple detection models leads to a better one.
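
To make the recipe concrete, here is a minimal PyTorch sketch of the averaging step, assuming checkpoints saved in the usual {'state_dict': ...} format; the file names and epoch range are illustrative only, and in this repo the SWA hook takes care of this bookkeeping during training. Note that BatchNorm statistics may need to be recomputed after averaging, as discussed in the SWA paper [1].

    import torch

    def average_checkpoints(paths):
        """Element-wise average of the 'state_dict' tensors from several checkpoints."""
        avg_state = None
        for path in paths:
            state = torch.load(path, map_location='cpu')['state_dict']
            if avg_state is None:
                avg_state = {k: v.clone().float() for k, v in state.items()}
            else:
                for k, v in state.items():
                    avg_state[k] += v.float()
        return {k: v / len(paths) for k, v in avg_state.items()}

    # Hypothetical file names: the 12 checkpoints from the extra cyclical-LR epochs.
    swa_state = average_checkpoints(['epoch_%d.pth' % e for e in range(13, 25)])
    torch.save({'state_dict': swa_state}, 'swa_model.pth')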

Updates

  • 2021.01.08 Reimplement the code; it is now more convenient, more flexible, and easier to use for both conventional training and SWA training. See Instructions.
  • 2021.01.07 Update to MMDetection v2.8.0.
  • 2020.12.24 Release the code.

Installation

  • This project is based on MMDetection, so the installation is the same as for the original MMDetection.

  • Please check get_started.md for installation. Note that you should change the PyTorch and CUDA versions to match yours when installing mmcv in step 3, and clone this repo instead of MMDetection in step 4; an illustrative mmcv command is given after this list.

  • If you run into problems with pycocotools, please install it by:

    pip install "git+https://github.com/open-mmlab/cocoapi.git#subdirectory=pycocotools"
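
For illustration only: the mmcv-full wheels are indexed by CUDA and PyTorch version, so assuming CUDA 10.1 and PyTorch 1.7.0 the install command in step 3 would look like the following (substitute your own versions):

    pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.7.0/index.html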
    

Usage of MMDetection

MMDetection provides a Colab tutorial, as well as full guidance for a quick run with an existing dataset or with a new dataset for beginners. There are also tutorials for finetuning models, adding new datasets, designing data pipelines, customizing models, customizing runtime settings, and useful tools.

Please refer to the FAQ for frequently asked questions.

Instructions

We add a SWA training phase to the object detector training process, implement a SWA hook that helps process the averaged models, and write a SWA config for conveniently deploying SWA training when training various detectors. We also provide many config files for reproducing the results in the paper.

By including the SWA config in detector config files and setting the related parameters, you can switch between different SWA training modes.

  1. Two-phase mode. In this mode, training begins with the conventional training phase and runs for the scheduled number of epochs. After that, SWA training starts, loading the best model on the validation set from the previous phase (because swa_load_from = 'best_bbox_mAP.pth' in the SWA config).

    As shown in the swa_vfnet_r50 config, the SWA config is included at line 4, and only the SWA optimizer is reset at line 118 of that script. Note that parameters configured in local scripts overwrite the values inherited from the SWA config.

    You can change the parameters included in the SWA config to use different optimizers or different learning rate schedules for the SWA training. For example, to use a different initial learning rate, say 0.02, you just need to set swa_optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) in the SWA config (global effect) or in the swa_vfnet_r50 config (local effect). A hedged sketch of these settings appears at the end of this section.

    To start the training, run:

    ./tools/dist_train.sh configs/swa/swa_vfnet_r50_fpn_1x_coco.py 8
    
    
  2. Only-SWA mode. In this mode, the conventional training is skipped and only the SWA training is performed. In general, this mode should work with a pre-trained detection model, which you can download from the MMDetection model zoo.

    Have a look at the swa_mask_rcnn_r101 config. By setting only_swa_training = True and swa_load_from = mask_rcnn_pretraind_model, this script conducts only SWA training, starting from a pre-trained detection model. To start the training, run:

    ./tools/dist_train.sh configs/swa/swa_mask_rcnn_r101_fpn_2x_coco.py 8
    
    

In both modes, we have implemented the validation stage and checkpoint-saving functions for the SWA model, so it is easy to monitor the performance and select the best SWA model.
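
As a rough illustration of how these modes fit together in a config file, here is a hedged sketch: only only_swa_training, swa_load_from, and swa_optimizer are named above, so the remaining field names and values below are assumptions to be checked against the actual SWA config in configs/swa/.

    # Hedged sketch of the SWA-related fields a detector config might set.
    # Only `only_swa_training`, `swa_load_from` and `swa_optimizer` are
    # documented above; the other names/values here are illustrative assumptions.

    # Two-phase mode: conventional training first, then SWA training that
    # starts from the best validated model of the first phase.
    only_swa_training = False
    swa_load_from = 'best_bbox_mAP.pth'

    # Only-SWA mode would instead set:
    #   only_swa_training = True
    #   swa_load_from = 'path/to/pretrained_detector.pth'  # e.g. model zoo weights

    # Optimizer for the SWA phase; override locally for a different initial LR.
    swa_optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)

    # Assumed: a cyclical LR schedule over 12 extra epochs, matching the paper's recipe.
    swa_lr_config = dict(policy='cyclic', target_ratio=(1, 0.01),
                         cyclic_times=12, step_ratio_up=0.0)
    swa_runner = dict(type='EpochBasedRunner', max_epochs=12)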

Results and Models

For your convenience, we provide the following SWA models. These models are obtained by averaging checkpoints that are trained with cyclical learning rates for 12 epochs.
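
How the cyclical schedule behaves can be pictured with a small sketch (our reading of the recipe, not code from this repo): within each of the 12 extra epochs the learning rate restarts at the high endpoint and decays to the low endpoint, so each epoch ends at a slightly different solution worth averaging.

    # A hypothetical per-iteration schedule, assuming a linear decay from
    # lr_max to lr_min within each epoch (endpoints as in the RetinaNet rows below).
    def cyclic_lr(it, iters_per_epoch, lr_max=0.01, lr_min=0.0001):
        frac = (it % iters_per_epoch) / max(iters_per_epoch - 1, 1)
        return lr_max + (lr_min - lr_max) * frac

    # The LR thus restarts high at each epoch boundary; the checkpoint saved
    # at the end of each of the 12 epochs is one of the models being averaged.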

Model                                        bbox AP (val)  segm AP (val)  Download
SWA-MaskRCNN-R50-1x-0.02-0.0002-38.2-34.7    39.1, +0.9     35.5, +0.8     model | config
SWA-MaskRCNN-R101-1x-0.02-0.0002-40.0-36.1   41.0, +1.0     37.0, +0.9     model | config
SWA-MaskRCNN-R101-2x-0.02-0.0002-40.8-36.6   41.7, +0.9     37.4, +0.8     model | config
SWA-FasterRCNN-R50-1x-0.02-0.0002-37.4       38.4, +1.0     -              model | config
SWA-FasterRCNN-R101-1x-0.02-0.0002-39.4      40.3, +0.9     -              model | config
SWA-FasterRCNN-R101-2x-0.02-0.0002-39.8      40.7, +0.9     -              model | config
SWA-RetinaNet-R50-1x-0.01-0.0001-36.5        37.8, +1.3     -              model | config
SWA-RetinaNet-R101-1x-0.01-0.0001-38.5       39.7, +1.2     -              model | config
SWA-RetinaNet-R101-2x-0.01-0.0001-38.9       40.0, +1.1     -              model | config
SWA-FCOS-R50-1x-0.01-0.0001-36.6             38.0, +1.4     -              model | config
SWA-FCOS-R101-1x-0.01-0.0001-39.2            40.3, +1.1     -              model | config
SWA-FCOS-R101-2x-0.01-0.0001-39.1            40.2, +1.1     -              model | config
SWA-YOLOv3(320)-D53-273e-0.001-0.00001-27.9  28.7, +0.8     -              model | config
SWA-YOLOv3(608)-D53-273e-0.001-0.00001-33.4  34.2, +0.8     -              model | config
SWA-VFNet-R50-1x-0.01-0.0001-41.6            42.8, +1.2     -              model | config
SWA-VFNet-R101-1x-0.01-0.0001-43.0           44.3, +1.3     -              model | config
SWA-VFNet-R101-2x-0.01-0.0001-43.5           44.5, +1.0     -              model | config

Notes:

  • SWA-MaskRCNN-R50-1x-0.02-0.0002-38.2-34.7 means this SWA model is produced from a pre-trained Mask RCNN model that has a ResNet-50 backbone, is trained under the 1x schedule with an initial learning rate of 0.02 and an ending learning rate of 0.0002, and achieves 38.2 bbox AP and 34.7 mask AP on COCO val2017. The SWA model achieves 39.1 bbox AP and 35.5 mask AP, which are higher than those of the pre-trained model by 0.9 bbox AP and 0.8 mask AP respectively. The same naming rule applies to the other detectors.

  • In addition to these baseline detectors, SWA can also improve more powerful detectors. One example is VFNetX, whose performance on COCO val2017 improves from 52.2 AP to 53.4 AP (+1.2 AP).

  • More detailed results including AP50 and AP75 can be found here.

Contributing

Any pull requests or issues are welcome.

Citation

Please consider citing our paper in your publications if this project helps your research. The BibTeX entry is as follows:

@article{zhang2020swa,
  title={SWA Object Detection},
  author={Zhang, Haoyang and Wang, Ying and Dayoub, Feras and S{\"u}nderhauf, Niko},
  journal={arXiv preprint arXiv:2012.12645},
  year={2020}
}

Acknowledgment

Many thanks to Dr Marlies Hankel and MASSIVE HPC for providing precious GPU computing resources!

We would also like to thank the MMDetection team for producing this great object detection toolbox.

License

This project is released under the Apache 2.0 license.

References

[1] Averaging Weights Leads to Wider Optima and Better Generalization; Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2018
