Semi-Supervised Learning for Fine-Grained Classification

Overview

This repository contains the code for:

  • A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification, Jong-Chyi Su, Zezhou Cheng, and Subhransu Maji, CVPR 2021. [paper, poster, slides]
  • Semi-Supervised Learning with Taxonomic Labels, Jong-Chyi Su and Subhransu Maji, BMVC 2021. [paper, slides]

Preparing Datasets and Splits

We used the following datasets in the paper: Semi-Aves, Semi-Fungi, and Semi-CUB (see below for download instructions).

In addition, the repository contains a new Semi-iNat dataset corresponding to the FGVC8 semi-supervised challenge:

  • Semi-iNat: This is a new dataset for the Semi-iNat Challenge at the FGVC8 workshop at CVPR 2021. Unlike Semi-Aves, Semi-iNat covers more species from different kingdoms and does not provide in-domain or out-of-domain labels. For more details, please see the challenge website.

The splits of each of these datasets can be found under data/${dataset}/${split}.txt corresponding to:

  • l_train -- labeled in-domain data
  • u_train_in -- unlabeled in-domain data
  • u_train_out -- unlabeled out-of-domain data
  • u_train -- combination of u_train_in and u_train_out
  • val -- validation set
  • l_train_val -- combination of l_train and val
  • test -- test set

Each line in these text files contains a filename and the corresponding class label.
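
For example, you can inspect a split and count its classes directly from the shell (a quick sketch, assuming whitespace-separated filename/label pairs as described above):

# Show the first few (filename, class label) pairs of the Semi-Aves labeled split
head -n 3 data/semi_aves/l_train.txt

# Count how many distinct class labels appear in this split
awk '{print $2}' data/semi_aves/l_train.txt | sort -u | wc -l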

Please download the datasets from the corresponding websites. For Semi-Aves, put the data under data/semi_aves. For Semi-Fungi and Semi-CUB, download the images and put them under data/semi_fungi/images and data/cub/images.

Note 1: For the Semi-Fungi experiments reported in the paper, the images are resized to a maximum of 300px on each side (a sketch of this resizing is given after these notes).
Note 2: We reported the results of another split of Semi-Aves in the appendix (for cross-validation), but we do not release its labels because doing so would leak the labels of the unlabeled data.
Note 3: We also provide the species names of Semi-Aves under data/semi_aves_species_names.txt, as well as the species names of Semi-Fungi. These names were not shared during the competition.
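
For reference, one possible way to reproduce the resizing from Note 1 is with ImageMagick (a sketch, not the authors' preprocessing script; the file extension is an assumption, adjust it to match your images):

# Shrink only images whose larger side exceeds 300px, preserving aspect ratio
mogrify -resize "300x300>" data/semi_fungi/images/*.jpg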

Training and Evaluation (CVPR paper)

We provide the code for all the methods included in the paper except FixMatch and MoCo: supervised training, self-training, pseudo-labeling (PL), and curriculum pseudo-labeling (CPL). The code is developed based on this PyTorch implementation.

For FixMatch, we used the official TensorFlow code and an unofficial PyTorch implementation to reproduce the results. For MoCo, we used this PyContrast implementation.

To train the model, use the following command:

CUDA_VISIBLE_DEVICES=0 python run_train.py --task ${task} --init ${init} --alg ${alg} --unlabel ${unlabel} --num_iter ${num_iter} --warmup ${warmup} --lr ${lr} --wd ${wd} --batch_size ${batch_size} --exp_dir ${exp_dir} --MoCo ${MoCo} --alpha ${alpha} --kd_T ${kd_T} --trainval

For example, to train a supervised model on the Semi-Aves dataset, initialized from an iNat pre-trained model and using in-domain unlabeled data only, run:

CUDA_VISIBLE_DEVICES=0 python run_train.py --task semi_aves --init inat --alg supervised --unlabel in --num_iter 10000 --lr 1e-3 --wd 1e-4 --exp_dir semi_aves_supervised_in --MoCo false --trainval

Note that for the Semi-Aves and Semi-Fungi experiments in the paper, we combined the training and validation sets for training (use the --trainval flag).
For all the hyper-parameters, please see the following shell scripts:

  • exp_sup.sh for supervised training
  • exp_PL.sh for pseudo-labeling
  • exp_CPL.sh for curriculum pseudo-labeling
  • exp_MoCo.sh for MoCo + supervised training
  • exp_distill.sh for self-training and MoCo + self-training

Training and Evaluation (BMVC paper)

In our BMVC paper, we added hierarchical supervision from coarse taxonomic labels on top of semi-supervised learning.

To train the model, use the following command:

CUDA_VISIBLE_DEVICES=0 python run_train_hierarchy.py --task ${task} --init ${init} --alg ${alg} --unlabel ${unlabel} --num_iter ${num_iter} --warmup ${warmup} --lr ${lr} --wd ${wd} --batch_size ${batch_size} --exp_dir ${exp_dir} --MoCo ${MoCo} --alpha ${alpha} --kd_T ${kd_T} --level ${level}

The following arguments differ from those above:

  • ${level}: choose from {genus, kingdom, phylum, class, order, family, species}
  • ${alg}: choose from {hierarchy, PL_hierarchy, distill_hierarchy}

For the settings and hyper-parameters, please see exp_hierarchy.sh.
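
For example, an illustrative run with hierarchical supervision at the phylum level (the hyper-parameter values here mirror the supervised example above and are not necessarily the paper's; the settings used in the paper are in exp_hierarchy.sh):

CUDA_VISIBLE_DEVICES=0 python run_train_hierarchy.py --task semi_aves --init inat --alg hierarchy --unlabel in --num_iter 10000 --lr 1e-3 --wd 1e-4 --exp_dir semi_aves_hierarchy_in --MoCo false --level phylum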

Pre-Trained Models

We provide supervised training models, MoCo pre-trained models, and MoCo + supervised training models for both the Semi-Aves and Semi-Fungi datasets. The models can be downloaded from:

http://vis-www.cs.umass.edu/semi-inat-2021/ssl_evaluation/models/${method}/${dataset}_${initialization}_${unlabel}.pth.tar

  • ${method}: choose from {supervised, MoCo_init, MoCo_supervised}
  • ${dataset}: choose from {semi_aves, semi_fungi}
  • ${initialization}: choose from {scratch, imagenet, inat}
  • ${unlabel}: choose from {in, inout}
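
For example, the following command downloads the MoCo + supervised model trained on Semi-Aves with iNat initialization and in-domain unlabeled data (substitute other values from the lists above for the remaining checkpoints):

wget http://vis-www.cs.umass.edu/semi-inat-2021/ssl_evaluation/models/MoCo_supervised/semi_aves_inat_in.pth.tar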

You need these models for the self-training methods. For self-training, the teacher model is initialized from the supervised model. For MoCo + self-training, the teacher model is initialized from the MoCo_supervised model and the student model is initialized from the MoCo_init model.

We also provide a ResNet-50 model pre-trained on iNaturalist-18. This model was trained using this GitHub code.

Related Challenges

  • Semi-iNat Challenge at the FGVC8 workshop, CVPR 2021
  • Semi-Aves Challenge at the FGVC7 workshop, CVPR 2020

Citation

@inproceedings{su2021realistic,
  author    = {Jong{-}Chyi Su and Zezhou Cheng and Subhransu Maji},
  title     = {A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}

@inproceedings{su2021taxonomic,
  author    = {Jong{-}Chyi Su and Subhransu Maji},
  title     = {Semi-Supervised Learning with Taxonomic Labels},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2021}
}

@article{su2021semi_iNat,
  author  = {Jong{-}Chyi Su and Subhransu Maji},
  title   = {The Semi-Supervised iNaturalist Challenge at the FGVC8 Workshop},
  journal = {arXiv preprint arXiv:2106.01364},
  year    = {2021}
}

@article{su2021semi_aves,
  author  = {Jong{-}Chyi Su and Subhransu Maji},
  title   = {The Semi-Supervised iNaturalist-Aves Challenge at FGVC7 Workshop},
  journal = {arXiv preprint arXiv:2103.06937},
  year    = {2021}
}