SCALoss: Side and Corner Aligned Loss for Bounding Box Regression (AAAI 2022)

PyTorch implementation of the paper "SCALoss: Side and Corner Aligned Loss for Bounding Box Regression" (AAAI 2022).

Introduction

(Figure: corner_center_comp)

  • IoU-based losses suffer from vanishing gradients when bounding boxes have low overlap, which slows convergence.
  • Side Overlap puts a larger penalty on low-overlap cases, while Corner Distance speeds up convergence.
  • SCALoss, which combines Side Overlap and Corner Distance, serves as a comprehensive similarity measure, leading to better localization performance and faster convergence (a rough sketch of the idea follows below).
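
The loss itself is implemented inside the YOLOv3 training code in this repository. Purely as an illustrative sketch of the two ingredients described above (the function name and the specific normalizations here are assumptions, not the paper's exact formulation), the combination looks roughly like this:

```python
import torch


def sca_like_loss(pred, target, eps=1e-7):
    """Illustrative Side-Overlap + Corner-Distance style penalty.

    `pred` and `target` are (N, 4) tensors of boxes in (x1, y1, x2, y2)
    format. This sketch only illustrates the two ingredients described
    above; it is NOT the paper's exact formulation.
    """
    px1, py1, px2, py2 = pred.unbind(-1)
    tx1, ty1, tx2, ty2 = target.unbind(-1)

    # Smallest enclosing extent on each axis (used for normalization).
    enc_w = torch.max(px2, tx2) - torch.min(px1, tx1) + eps
    enc_h = torch.max(py2, ty2) - torch.min(py1, ty1) + eps

    # Side-overlap term: how well the box projections overlap on each
    # axis, relative to the enclosing extent on that axis.
    inter_w = (torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(min=0)
    inter_h = (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(min=0)
    side_overlap = 0.5 * (inter_w / enc_w + inter_h / enc_h)

    # Corner-distance term: squared distances between the top-left and
    # bottom-right corner pairs, normalized by the enclosing box diagonal.
    enc_diag2 = enc_w ** 2 + enc_h ** 2
    d_tl = (px1 - tx1) ** 2 + (py1 - ty1) ** 2
    d_br = (px2 - tx2) ** 2 + (py2 - ty2) ** 2
    corner_dist = (d_tl + d_br) / (2 * enc_diag2)

    # Penalize poor side alignment and large corner distance.
    return (1.0 - side_overlap) + corner_dist
```

Unlike a pure IoU term, the corner-distance component keeps producing gradients even when the two boxes do not overlap at all, which is the gradient-vanishing issue raised in the bullets above.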

Prerequisites

Install

Conda is not required for the installation, but the steps below are described using it.

$ conda create -n sca-yolo python=3.8 -y
$ conda activate sca-yolo
$ git clone https://github.com/Turoad/SCALoss
$ cd SCALoss
$ pip install -r requirements.txt

Getting started

Train a model:

python train.py --data [dataset config] --cfg [model config] --weights [path to pretrained weights] --batch-size [batch size]

For example, to train YOLOv3-tiny on the COCO dataset from scratch with a batch size of 128:

python train.py --data coco.yaml --cfg yolov3-tiny.yaml --weights '' --batch-size 128

For multi-GPU training, it is recommended to use:

python -m torch.distributed.launch --nproc_per_node 4 train.py --img 640 --batch 32 --epochs 300 --data coco.yaml --weights '' --cfg yolov3.yaml --device 0,1,2,3
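
Note: on newer PyTorch releases (1.10+), torch.distributed.launch is deprecated in favor of torchrun; assuming such a version, the equivalent invocation would be:

torchrun --nproc_per_node 4 train.py --img 640 --batch 32 --epochs 300 --data coco.yaml --weights '' --cfg yolov3.yaml --device 0,1,2,3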

Test a model:

python val.py --data coco.yaml --weights runs/train/exp15/weights/last.pt --img 640 --iou-thres=0.65

Results and Checkpoints

YOLOv3-tiny

| Model | mAP 0.5:0.95 | AP 0.5 | AP 0.65 | AP 0.75 | AP 0.8 | AP 0.9 |
|---|---|---|---|---|---|---|
| IoU | 18.8 | 36.2 | 27.2 | 17.3 | 11.6 | 1.9 |
| GIoU (relative improv. %) | 18.8 (0%) | 36.2 (0%) | 27.1 (-0.37%) | 17.6 (1.73%) | 11.8 (1.72%) | 2.1 (10.53%) |
| DIoU (relative improv. %) | 18.8 (0%) | 36.4 (0.55%) | 26.9 (-1.1%) | 17.2 (-0.58%) | 11.8 (1.72%) | 1.9 (0%) |
| CIoU (relative improv. %) | 18.9 (0.53%) | 36.6 (1.1%) | 27.3 (0.37%) | 17.2 (-0.58%) | 11.6 (0%) | 2.1 (10.53%) |
| SCA (relative improv. %) | 19.9 (5.85%) | 36.6 (1.1%) | 28.3 (4.04%) | 19.1 (10.4%) | 13.3 (14.66%) | 2.7 (42.11%) |

The convergence curves of the different losses on YOLOv3-tiny: see the convergence curve figure in the repository.

YOLOv3

| Model | mAP 0.5:0.95 | AP 0.5 | AP 0.65 | AP 0.75 | AP 0.8 | AP 0.9 |
|---|---|---|---|---|---|---|
| IoU | 44.8 | 64.2 | 57.5 | 48.8 | 41.8 | 20.7 |
| GIoU (relative improv. %) | 44.7 (-0.22%) | 64.4 (0.31%) | 57.5 (0%) | 48.5 (-0.61%) | 42.0 (0.48%) | 20.4 (-1.45%) |
| DIoU (relative improv. %) | 44.7 (-0.22%) | 64.3 (0.16%) | 57.5 (0%) | 48.9 (0.2%) | 42.1 (0.72%) | 19.8 (-4.35%) |
| CIoU (relative improv. %) | 44.7 (-0.22%) | 64.3 (0.16%) | 57.5 (0%) | 48.9 (0.2%) | 41.7 (-0.24%) | 19.8 (-4.35%) |
| SCA (relative improv. %) | 45.3 (1.12%) | 64.1 (-0.16%) | 57.9 (0.7%) | 49.9 (2.25%) | 43.3 (3.59%) | 21.4 (3.38%) |

YOLOv5s

Coming soon.

Citation

If our paper and code are beneficial to your work, please consider citing:

@inproceedings{zheng2022scaloss,
  title={SCALoss: Side and Corner Aligned Loss for Bounding Box Regression},
  author={Zheng, Tu and Zhao, Shuai and Liu, Yang and Liu, Zili and Cai, Deng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2022}
}

Acknowledgement

The code is modified from ultralytics/yolov3.
