Official PyTorch implementation of Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision.

Overview

Test-Agnostic Long-Tailed Recognition

This repository is the official PyTorch implementation of Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision.

  • TADE (our method) innovates the expert training scheme by introducing diversity-promoting expertise-guided losses, which train different experts to handle distinct class distributions. The learned experts are thus more diverse than in existing multi-expert methods, improving ensemble performance, and together they simulate a wide spectrum of possible class distributions (see the loss sketch below).
  • TADE develops a new self-supervised method, prediction stability maximization, which adaptively aggregates these experts using only unlabeled test data, so the ensemble better handles unknown test class distributions (see the test-time sketch in the Script section).
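The following minimal PyTorch sketch illustrates the diversity-promoting idea behind the expertise-guided losses: each expert's cross-entropy is adjusted by a different multiple of the log class prior, so each expert simulates a different test class distribution. The loss forms and the strength `lam` are our own illustrative assumptions; see the paper and the config files for the exact losses used in this repository.

```python
import torch
import torch.nn.functional as F

def expertise_guided_losses(expert_logits, targets, class_counts, lam=2.0):
    """Illustrative diversity-promoting losses for a three-expert model.

    expert_logits: list of three [B, C] logit tensors, one per expert.
    targets:       [B] ground-truth labels.
    class_counts:  [C] training-set class frequencies.
    """
    log_prior = torch.log(class_counts.float() / class_counts.sum())
    # Forward expert: plain CE, fits the long-tailed training prior.
    loss_fwd = F.cross_entropy(expert_logits[0], targets)
    # Uniform expert: balanced-softmax CE, simulates a uniform prior.
    loss_uni = F.cross_entropy(expert_logits[1] + log_prior, targets)
    # Backward expert: stronger adjustment, simulates an inverse prior.
    loss_bwd = F.cross_entropy(expert_logits[2] + lam * log_prior, targets)
    return loss_fwd + loss_uni + loss_bwd
```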

Results

ImageNet-LT (ResNeXt-50)

Long-tailed recognition with uniform test class distribution:

| Method      | MACs (G) | Top-1 acc. | Model    |
| ----------- | -------- | ---------- | -------- |
| Softmax     | 4.26     | 48.0       |          |
| RIDE        | 6.08     | 56.3       |          |
| TADE (ours) | 6.08     | 58.8       | Download |

Test-agnostic long-tailed recognition, where Forward-k denotes a test class distribution that is long-tailed in the same class order as training with imbalance ratio k, and Backward-k denotes the inversely ordered counterpart:

| Method      | MACs (G) | Forward-50 | Forward-10 | Uniform | Backward-10 | Backward-50 |
| ----------- | -------- | ---------- | ---------- | ------- | ----------- | ----------- |
| Softmax     | 4.26     | 66.1       | 60.3       | 48.0    | 34.9        | 27.6        |
| RIDE        | 6.08     | 67.6       | 64.0       | 56.3    | 48.7        | 44.0        |
| TADE (ours) | 6.08     | 69.4       | 65.4       | 58.8    | 54.5        | 53.1        |

CIFAR100-LT, imbalance ratio 100 (ResNet-32)

Long-tailed recognition with uniform test class distribution:

| Method      | MACs (G) | Top-1 acc. |
| ----------- | -------- | ---------- |
| Softmax     | 0.07     | 41.4       |
| RIDE        | 0.11     | 48.0       |
| TADE (ours) | 0.11     | 49.8       |

Test-agnostic long-tailed recognition:

| Method      | MACs (G) | Forward-50 | Forward-10 | Uniform | Backward-10 | Backward-50 |
| ----------- | -------- | ---------- | ---------- | ------- | ----------- | ----------- |
| Softmax     | 0.07     | 62.3       | 56.2       | 41.4    | 25.8        | 17.5        |
| RIDE        | 0.11     | 63.0       | 57.0       | 48.0    | 35.4        | 29.3        |
| TADE (ours) | 0.11     | 65.9       | 58.3       | 49.8    | 43.9        | 42.4        |

Places-LT (ResNet-152)

Long-tailed recognition with uniform test class distribution:

| Method      | MACs (G) | Top-1 acc. |
| ----------- | -------- | ---------- |
| Softmax     | 11.56    | 31.4       |
| RIDE        | 13.18    | 40.3       |
| TADE (ours) | 13.18    | 40.9       |

Test-agnostic long-tailed recognition:

| Method      | MACs (G) | Forward-50 | Forward-10 | Uniform | Backward-10 | Backward-50 |
| ----------- | -------- | ---------- | ---------- | ------- | ----------- | ----------- |
| Softmax     | 11.56    | 45.6       | 40.2       | 31.4    | 23.4        | 19.4        |
| RIDE        | 13.18    | 43.1       | 41.6       | 40.3    | 38.2        | 36.9        |
| TADE (ours) | 13.18    | 46.4       | 43.3       | 40.9    | 41.4        | 41.6        |

iNaturalist 2018 (ResNet-50)

Long-tailed recognition with uniform test class distribution:

| Method      | MACs (G) | Top-1 acc. |
| ----------- | -------- | ---------- |
| Softmax     | 4.14     | 64.7       |
| RIDE        | 5.80     | 71.8       |
| TADE (ours) | 5.80     | 72.9       |

Test-agnostic long-tailed recognition:

| Method      | MACs (G) | Forward-3 | Forward-2 | Uniform | Backward-2 | Backward-3 |
| ----------- | -------- | --------- | --------- | ------- | ---------- | ---------- |
| Softmax     | 4.14     | 65.4      | 65.5      | 64.7    | 64.0       | 63.4       |
| RIDE        | 5.80     | 71.5      | 71.9      | 71.8    | 71.9       | 71.8       |
| TADE (ours) | 5.80     | 72.3      | 72.5      | 72.9    | 73.5       | 73.3       |

Requirements

  • To install requirements:
pip install -r requirements.txt

Hardware requirements

8 GPUs with >= 11 GB of memory each are recommended. With less memory, the model with more experts may not fit, especially on datasets with more classes (the FC layers become large). We do not support CPU training, but CPU inference could be enabled with slight modification.

Datasets

Four benchmark datasets

  • Please download these datasets and place them in the /data directory.
  • ImageNet-LT and Places-LT can be found here.
  • iNaturalist data should be the 2018 version from here.
  • CIFAR-100 will be downloaded automatically with the dataloader.
data
├── ImageNet_LT
│   ├── test
│   ├── train
│   └── val
├── CIFAR100
│   └── cifar-100-python
├── Place365
│   ├── data_256
│   ├── test_256
│   └── val_256
└── iNaturalist 
    ├── test2018
    └── train_val2018

Txt files

  • We provide txt files of the test-agnostic class distributions for ImageNet-LT, Places-LT and iNaturalist 2018. The CIFAR-100 splits are generated automatically by the code (a generation sketch follows the file list below).
  • For iNaturalist 2018, please unzip iNaturalist_train.zip.
data_txt
├── ImageNet_LT
│   ├── ImageNet_LT_backward2.txt
│   ├── ImageNet_LT_backward5.txt
│   ├── ImageNet_LT_backward10.txt
│   ├── ImageNet_LT_backward25.txt
│   ├── ImageNet_LT_backward50.txt
│   ├── ImageNet_LT_forward2.txt
│   ├── ImageNet_LT_forward5.txt
│   ├── ImageNet_LT_forward10.txt
│   ├── ImageNet_LT_forward25.txt
│   ├── ImageNet_LT_forward50.txt
│   ├── ImageNet_LT_test.txt
│   ├── ImageNet_LT_train.txt
│   ├── ImageNet_LT_uniform.txt
│   └── ImageNet_LT_val.txt
├── Places_LT_v2
│   ├── Places_LT_backward2.txt
│   ├── Places_LT_backward5.txt
│   ├── Places_LT_backward10.txt
│   ├── Places_LT_backward25.txt
│   ├── Places_LT_backward50.txt
│   ├── Places_LT_forward2.txt
│   ├── Places_LT_forward5.txt
│   ├── Places_LT_forward10.txt
│   ├── Places_LT_forward25.txt
│   ├── Places_LT_forward50.txt
│   ├── Places_LT_test.txt
│   ├── Places_LT_train.txt
│   ├── Places_LT_uniform.txt
│   └── Places_LT_val.txt
└── iNaturalist18
    ├── iNaturalist18_backward2.txt
    ├── iNaturalist18_backward3.txt
    ├── iNaturalist18_forward2.txt
    ├── iNaturalist18_forward3.txt
    ├── iNaturalist18_train.txt
    ├── iNaturalist18_uniform.txt
    └── iNaturalist18_val.txt 
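For reference, the forward/backward txt files encode LADE-style shifted test distributions. The sketch below shows one plausible way such splits can be generated (a hypothetical helper, not the repo's actual generation code): classes are ranked by training frequency, per-class quotas decay exponentially with rank to reach the target imbalance ratio, and the ranking is inverted for backward splits.

```python
import numpy as np

def shifted_test_indices(test_labels, train_counts, ratio, backward=False, seed=0):
    """Subsample a balanced test set into a long-tailed one (hypothetical).

    test_labels:  [N] int labels of the original test set.
    train_counts: [C] per-class training frequencies.
    ratio:        target imbalance ratio k (largest / smallest class quota).
    backward:     invert the class ordering, so training-tail classes
                  become the test-set head.
    """
    rng = np.random.default_rng(seed)
    test_labels = np.asarray(test_labels)
    num_classes = len(train_counts)
    order = np.argsort(-np.asarray(train_counts))  # most frequent first
    if backward:
        order = order[::-1]
    max_per_class = np.bincount(test_labels, minlength=num_classes).max()
    keep = []
    for rank, c in enumerate(order):
        # Exponential profile: rank 0 keeps everything, the last rank 1/k.
        quota = int(max_per_class * ratio ** (-rank / (num_classes - 1)))
        idx = np.flatnonzero(test_labels == c)
        keep.extend(rng.choice(idx, size=min(quota, len(idx)), replace=False))
    return np.sort(np.asarray(keep))
```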

Pretrained models

  • For training on Places-LT, we follow previous methods and use a pre-trained model.
  • Please download the checkpoint. Unzip and move the checkpoint files to /model/pretrained_model_places/.

Script

ImageNet-LT

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_imagenet_lt_resnext50_tade.json

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_imagenet.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run the command below (a conceptual sketch of this stage follows):
python test_train_imagenet.py -c configs/test_time_imagenet_lt_resnext50_tade.json -r checkpoint_path
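Conceptually, this stage freezes the trained experts and learns only the expert aggregation weights by prediction stability maximization: the weighted ensemble prediction should agree, in cosine similarity, across two augmented views of the same unlabeled test image. Below is a minimal PyTorch sketch with hypothetical names (`model` is assumed to return a list of per-expert logits, and the loader to yield view pairs); see test_train_imagenet.py for the actual implementation.

```python
import torch
import torch.nn.functional as F

def learn_aggregation_weights(model, unlabeled_loader, num_experts=3,
                              epochs=5, lr=0.025):
    """Prediction stability maximization over unlabeled test data."""
    w = torch.nn.Parameter(torch.ones(num_experts))  # aggregation weights
    optimizer = torch.optim.SGD([w], lr=lr, momentum=0.9)
    model.eval()
    for _ in range(epochs):
        for view1, view2 in unlabeled_loader:  # two augmented views
            with torch.no_grad():  # the experts stay frozen
                probs1 = torch.stack(model(view1)).softmax(dim=-1)  # [E, B, C]
                probs2 = torch.stack(model(view2)).softmax(dim=-1)
            alpha = torch.softmax(w, dim=0)  # normalized expert weights
            p1 = torch.einsum('e,ebc->bc', alpha, probs1)
            p2 = torch.einsum('e,ebc->bc', alpha, probs2)
            # Maximize stability: cosine similarity between the two views.
            loss = -F.cosine_similarity(p1, p2, dim=-1).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return torch.softmax(w, dim=0).detach()
```

At inference time, the learned weights are used to combine the experts' predictions for the given test distribution.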

CIFAR100-LT

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_cifar100_ir100_tade.json
  • You can change the imbalance ratio from 100 to 10 or 50 by choosing the corresponding config file.

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_cifar.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_cifar.py -c configs/test_time_cifar100_ir100_tade.json -r checkpoint_path
  • You can change the imbalance ratio from 100 to 10 or 50 by choosing the corresponding config file.

Places-LT

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_places_lt_resnet152_tade.json

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test_places.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_places.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_places.py -c configs/test_time_places_lt_resnet152_tade.json -r checkpoint_path

iNaturalist 2018

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_iNaturalist_resnet50_tade.json

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_inat.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_inat.py -c configs/test_time_iNaturalist_resnet50_tade.json -r checkpoint_path

Citation

If you find our work inspiring or use our codebase in your research, please cite our work:

@article{zhang2021test,
  title={Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision},
  author={Zhang, Yifan and Hooi, Bryan and Hong, Lanqing and Feng, Jiashi},
  journal={arXiv},
  year={2021}
}

Acknowledgements

This project is based on this pytorch template.

The multi-expert framework is based on RIDE. The generation of agnostic test class distributions takes reference from LADE.
