Classifier-Balancing

This repository contains code for the paper:

Decoupling Representation and Classifier for Long-Tailed Recognition
Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, Yannis Kalantidis
[OpenReview] [Arxiv] [PDF] [Slides] [@ICLR]
Facebook AI Research, National University of Singapore
International Conference on Learning Representations (ICLR), 2020

Abstract

The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but all of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability with relative ease by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification.


If you find this code useful, consider citing our work:

@inproceedings{kang2019decoupling,
  title={Decoupling representation and classifier for long-tailed recognition},
  author={Kang, Bingyi and Xie, Saining and Rohrbach, Marcus and Yan, Zhicheng
          and Gordo, Albert and Feng, Jiashi and Kalantidis, Yannis},
  booktitle={Eighth International Conference on Learning Representations (ICLR)},
  year={2020}
}

Requirements

The code is based on https://github.com/zhmiao/OpenLongTailRecognition-OLTR.

Dataset

  • ImageNet_LT and Places_LT

    Download the ImageNet_2014 and Places_365 datasets.

  • iNaturalist 2018

    • Download the dataset following the instructions here.
    • Run cd data/iNaturalist18, then generate the image name files with this script, or use the existing ones [here].

Change the data_root in main.py accordingly.
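
For reference, a minimal sketch of what the data_root setting could look like; the dictionary keys and paths below are assumptions for illustration, so match them to the actual variable in main.py and to wherever the datasets were downloaded:

# Hypothetical example only; the exact keys and structure in main.py may differ.
data_root = {
    'ImageNet': '/path/to/ImageNet_2014',
    'Places': '/path/to/Places_365',
    'iNaturalist18': '/path/to/iNaturalist18',
}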

Representation Learning

  1. Instance-balanced Sampling
python main.py --cfg ./config/ImageNet_LT/feat_uniform.yaml
  2. Class-balanced Sampling
python main.py --cfg ./config/ImageNet_LT/feat_balance.yaml
  3. Square-root Sampling
python main.py --cfg ./config/ImageNet_LT/feat_squareroot.yaml
  4. Progressively-balanced Sampling (see the sampling-probability sketch after this list)
python main.py --cfg ./config/ImageNet_LT/feat_shift.yaml
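
The four strategies differ only in the probability p_j of sampling an image from class j. In the paper this is written as p_j = n_j^q / Σ_i n_i^q, where n_j is the number of training images in class j: q = 1 gives instance-balanced, q = 0 class-balanced, and q = 1/2 square-root sampling, while progressively-balanced sampling linearly shifts from the instance-balanced to the class-balanced distribution over training epochs. A minimal NumPy sketch of these probabilities (the function names are illustrative and not part of this codebase):

import numpy as np

def sampling_probs(class_counts, q):
    # p_j = n_j^q / sum_i n_i^q
    # q = 1: instance-balanced, q = 0: class-balanced, q = 0.5: square-root
    counts = np.asarray(class_counts, dtype=float)
    weights = counts ** q
    return weights / weights.sum()

def progressively_balanced_probs(class_counts, epoch, total_epochs):
    # Interpolate from instance-balanced to class-balanced sampling over training.
    t = epoch / total_epochs
    return (1.0 - t) * sampling_probs(class_counts, 1.0) + t * sampling_probs(class_counts, 0.0)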

Test the jointly learned classifier from the representation-learning stage:

python main.py --cfg ./config/ImageNet_LT/feat_uniform.yaml --test 

Classifier Learning

  1. Nearest Class Mean classifier (NCM).
python main.py --cfg ./config/ImageNet_LT/feat_uniform.yaml --test --knn
  2. Classifier Re-training (cRT)
python main.py --cfg ./config/ImageNet_LT/cls_crt.yaml --model_dir ./logs/ImageNet_LT/models/resnext50_uniform_e90
python main.py --cfg ./config/ImageNet_LT/cls_crt.yaml --test
  3. Tau-normalization (see the weight-rescaling sketch after this list)

Extract features

for split in train_split val test
do
  python main.py --cfg ./config/ImageNet_LT/feat_uniform.yaml --test --save_feat $split
done

Evaluation

for split in train val test
do
  python tau_norm.py --root ./logs/ImageNet_LT/models/resnext50_uniform_e90/ --type $split
done
  4. Learnable weight scaling (LWS)
python main.py --cfg ./config/ImageNet_LT/cls_lws.yaml --model_dir ./logs/ImageNet_LT/models/resnext50_uniform_e90
python main.py --cfg ./config/ImageNet_LT/cls_lws.yaml --test
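
For reference, tau-normalization rescales the classifier weights obtained from joint training: each class weight vector w_j is replaced by w_j / ||w_j||^tau, where tau is chosen on a validation set (tau = 0 keeps the joint classifier, tau = 1 fully normalizes the weight norms), while LWS keeps the weight directions fixed and learns only a per-class scaling factor. A minimal PyTorch sketch of the rescaling step, not the actual tau_norm.py implementation:

import torch

def tau_normalize(fc_weights, tau):
    # fc_weights: (num_classes, feat_dim) weight matrix of the linear classifier.
    # Rescale each class weight vector w_j by 1 / ||w_j||^tau.
    norms = fc_weights.norm(p=2, dim=1, keepdim=True)
    return fc_weights / norms.pow(tau)

# Classification with the rescaled weights (bias omitted for simplicity):
# logits = features @ tau_normalize(fc_weights, tau).t()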

Results and Models

ImageNet_LT

  • Representation learning

    Top-1 accuracy (%) on many-, medium-, and few-shot classes and overall:

    Sampling                 Many   Medium   Few    All    Model
    Instance-Balanced        65.9   37.5     7.7    44.4   ResNeXt50
    Class-Balanced           61.8   40.1     15.5   45.1   ResNeXt50
    Square-Root              64.3   41.2     17.0   46.8   ResNeXt50
    Progressively-Balanced   61.9   43.2     19.4   47.2   ResNeXt50

    For other models trained with instance-balanced (natural) sampling:
    [ResNet50] [ResNet101] [ResNet152] [ResNeXt101] [ResNeXt152]

  • Classifier learning

    Classifier          Many   Medium   Few    All    Model
    Joint               65.9   37.5     7.7    44.4   ResNeXt50
    NCM                 56.6   45.3     28.1   47.3   ResNeXt50
    cRT                 61.8   46.2     27.4   49.6   ResNeXt50
    Tau-normalization   59.1   46.9     30.7   49.4   ResNeXt50
    LWS                 60.2   47.2     30.3   49.9   ResNeXt50

iNaturalist 2018

Places_LT

  • Representation learning
    We provide a pretrained ResNet152 with instance-balanced (natural) sampling: [link]
  • Classifier learning
    We provide the cRT and LWS models based on the above pretrained ResNet152 model as follows:
    [ResNet152(cRT)] [ResNet152(LWS)]

To test a pretrained model:
python main.py --cfg /path/to/config/file --model_dir /path/to/model/file --test

License

This project is licensed under the license found in the LICENSE file in the root directory of this source tree (here). Portions of the source code are from the OLTR project.
