OpenLT: An open-source project for long-tail classification

Supported methods for long-tailed recognition include CE, Decouple (cRT / LWS), Balanced Softmax, LDAM-DRW, and Logit Adjustment; the reproduced results below cover a representative subset, and a minimal sketch of one representative loss follows.
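
To illustrate the flavor of these methods, the sketch below shows the log-prior adjustment shared by Balanced Softmax and Logit Adjustment. It is a self-contained example for explanation only, not the loss implementation used in this repository.

```python
# Minimal sketch (not this repo's code) of the idea behind Balanced Softmax /
# Logit Adjustment: shift each class logit by the log of its training prior so
# that the softmax is no longer biased toward head classes.
import torch
import torch.nn.functional as F

def balanced_softmax_loss(logits, targets, class_counts, tau=1.0):
    """logits: (N, C), targets: (N,), class_counts: (C,) training samples per class."""
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)  # add the log-prior per class
    return F.cross_entropy(adjusted, targets)
```

With `tau=1.0` this reduces to the Balanced Softmax loss; other values of `tau` correspond to the logit-adjusted loss.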

Reproduce Results

Here we show a subset of the results to demonstrate that our implementation is reasonable.

ImageNet-LT

| Method | Backbone | Reported Result | Our Implementation |
| --- | --- | --- | --- |
| CE | ResNet-10 | 34.8 | 35.3 |
| Decouple-cRT | ResNet-10 | 41.8 | 41.8 |
| Decouple-LWS | ResNet-10 | 41.4 | 41.6 |
| BalanceSoftmax | ResNet-10 | 41.8 | 41.4 |
| CE | ResNet-50 | 41.6 | 43.2 |
| LDAM-DRW* | ResNet-50 | 48.8 | 51.2 |
| Decouple-cRT | ResNet-50 | 47.3 | 48.7 |
| Decouple-LWS | ResNet-50 | 47.7 | 49.3 |

CIFAR100-LT (Imbalance Ratio 100)

$^{\dagger}$ means the reported result is copied from LADE.

| Method | Dataset | Reported Result | Our Implementation |
| --- | --- | --- | --- |
| CE | CIFAR100-LT | 39.1 | 40.3 |
| LDAM-DRW | CIFAR100-LT | 42.04 | 42.9 |
| LogitAdjust | CIFAR100-LT | 43.89 | 45.3 |
| BalanceSoftmax$^{\dagger}$ | CIFAR100-LT | 45.1 | 46.47 |

Requirements

Packages

  • Python >= 3.7, < 3.9
  • PyTorch >= 1.6
  • tqdm (Used in test.py)
  • tensorboard >= 1.14 (for visualization)
  • pandas
  • numpy
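
A matching environment can be set up with pip, for example (package names follow the list above; for a specific CUDA build, install PyTorch according to its official instructions):

pip install torch tqdm tensorboard pandas numpy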

Dataset Preparation

The CIFAR code downloads the data automatically through the dataloader. We use the data in the same way as classifier-balancing. For ImageNet-LT and iNaturalist, please prepare the data in the data directory. ImageNet-LT can be found at this link. The iNaturalist data should be the 2018 version from this repo (note that it now requires payment to download). The annotations can be found here. Please arrange the files as shown below:

data
├── cifar-100-python
│   ├── file.txt~
│   ├── meta
│   ├── test
│   └── train
├── cifar-100-python.tar.gz
├── ImageNet_LT
│   ├── ImageNet_LT_open.txt
│   ├── ImageNet_LT_test.txt
│   ├── ImageNet_LT_train.txt
│   ├── ImageNet_LT_val.txt
│   ├── Tiny_ImageNet_LT_train.txt (Optional)
│   ├── Tiny_ImageNet_LT_val.txt (Optional)
│   ├── Tiny_ImageNet_LT_test.txt (Optional)
│   ├── test
│   ├── train
│   └── val
└── iNaturalist18
    ├── iNaturalist18_train.txt
    ├── iNaturalist18_val.txt
    └── train_val2018
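
As a quick, optional sanity check (this helper is not part of the repository), you can verify that the annotation files are in place before launching training:

```python
# Optional sanity check: confirm the expected data layout shown above exists.
from pathlib import Path

root = Path("data")
expected = [
    "ImageNet_LT/ImageNet_LT_train.txt",
    "ImageNet_LT/ImageNet_LT_val.txt",
    "ImageNet_LT/ImageNet_LT_test.txt",
    "iNaturalist18/iNaturalist18_train.txt",
    "iNaturalist18/iNaturalist18_val.txt",
]
for rel in expected:
    status = "ok" if (root / rel).exists() else "MISSING"
    print(f"{status:>7}  {root / rel}")
```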

Training and Evaluation Instructions

Single Stage Training

python train.py -c path_to_config_file

For example, to train a model with LDAM Loss on CIFAR-100-LT:

python train.py -c configs/CIFAR-100/LDAMLoss.json

Decouple Training (Stage-2)

python train.py -c path_to_config_file -crt path_to_stage_one_checkpoints

For example, to train a model with an LWS classifier on ImageNet-LT:

python train.py -c configs/ImageNet-LT/R50_LWS.json -lws path_to_stage_one_checkpoints
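
Conceptually, the decoupled second stage loads the stage-one checkpoint and re-trains only the classifier on top of the frozen representation; the repo's trainer handles this internally via the -crt / -lws options. The sketch below is only an illustration of the idea, and the classifier attribute name `fc` is an assumption:

```python
# Conceptual sketch of classifier re-training (cRT); not this repo's trainer.
import torch

def prepare_stage2(model, stage1_ckpt, device="cuda"):
    state = torch.load(stage1_ckpt, map_location=device)
    # pytorch-template checkpoints usually store weights under "state_dict";
    # fall back to the raw dict otherwise (assumption).
    model.load_state_dict(state.get("state_dict", state))
    for name, param in model.named_parameters():
        # Freeze the backbone; keep only the classifier trainable
        # ("fc" is a hypothetical name for the final linear layer).
        param.requires_grad = name.startswith("fc")
    return model.to(device)
```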

Test

To test a checkpoint, place it in the same directory as its corresponding config file.

python test.py -r path_to_checkpoint

Resume Training

python train.py -c path_to_config_file -r path_to_resume_checkpoint

Please see the pytorch template that we use for additional, more general usage of this project.

FP16 Training

Setting fp16 in utils/util.py enables FP16 training. However, this feature is subject to change, may not work with every setting or model, and should be double-checked if you use it; only some models work (see the autograd-related code). We do not plan to provide support for it, since it is outside our focus and exists only for faster training and lower memory usage. In our experiments, FP16 training did not reduce model accuracy, whether on a small dataset (CIFAR-LT) or a large one (ImageNet-LT, iNaturalist).
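
For reference, a typical mixed-precision training loop with torch.cuda.amp (available from PyTorch 1.6) looks roughly like the sketch below. It only illustrates the mechanism and is not the actual code path toggled by utils/util.py.

```python
# Illustrative sketch only: a typical torch.cuda.amp mixed-precision loop.
# The actual switch in utils/util.py may be wired differently.
import torch

def train_one_epoch(model, loader, criterion, optimizer, device, use_fp16=True):
    scaler = torch.cuda.amp.GradScaler(enabled=use_fp16)
    model.train()
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad()
        # The forward pass runs in half precision where it is numerically safe.
        with torch.cuda.amp.autocast(enabled=use_fp16):
            outputs = model(images)
            loss = criterion(outputs, targets)
        # The scaler rescales the loss to avoid gradient underflow in fp16.
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
```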

Visualization

We use TensorBoard as the visualization tool and log the accuracy of each class and of the different class groups during training:

tensorboard --logdir path_to_dir

We also provide simple code to visualize the feature distribution with t-SNE and the calibration with reliability diagrams. Please check the parameters in plot_tsne.py and plot_ece.py, then run:

python plot_tsne.py

or

python plot_ece.py
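
For context, a reliability diagram visualizes the gap between confidence and accuracy in each confidence bin, and the expected calibration error (ECE) summarizes that gap. A minimal sketch of the computation, independent of the actual plot_ece.py, is:

```python
# Minimal ECE computation behind a reliability diagram (illustration only;
# plot_ece.py has its own parameters and plotting code).
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """All arguments are 1-D NumPy arrays: max softmax probability,
    predicted class, and ground-truth class, respectively."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        acc = (predictions[mask] == labels[mask]).mean()   # accuracy in the bin
        conf = confidences[mask].mean()                    # mean confidence in the bin
        ece += mask.mean() * abs(acc - conf)               # gap weighted by bin size
    return ece
```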

PyTorch Template

This project is based on this pytorch template. The template's README explains its functionality; we list the most frequently used features in this README.

License

This project is licensed under the MIT License. See LICENSE for more details. The third-party components acknowledged below follow their original licenses.

Acknowledgements

This project is mainly based on RIDE's codebase. While reproducing and organizing the code, we also referred to other excellent repositories, such as decouple and LDAM.
