Semi-Supervised Learning for Fine-Grained Classification

Overview

This repository contains the code for:

  • A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification, Jong-Chyi Su, Zezhou Cheng, and Subhransu Maji, CVPR 2021. [paper, poster, slides]
  • Semi-Supervised Learning with Taxonomic Labels, Jong-Chyi Su and Subhransu Maji, BMVC 2021. [paper, slides]

Preparing Datasets and Splits

We used the following datasets in the paper: Semi-Aves, Semi-Fungi, and Semi-CUB.

In addition, the repository covers the new Semi-iNat dataset corresponding to the FGVC8 semi-supervised challenge:

  • Semi-iNat: This is a new dataset for the Semi-iNat Challenge at the FGVC8 workshop at CVPR 2021. Unlike Semi-Aves, Semi-iNat covers more species from different kingdoms and does not include in-domain or out-of-domain labels. For more details, please see the challenge website.

The splits of each of these datasets can be found under data/${dataset}/${split}.txt corresponding to:

  • l_train -- labeled in-domain data
  • u_train_in -- unlabeled in-domain data
  • u_train_out -- unlabeled out-of-domain data
  • u_train (combines u_train_in and u_train_out)
  • val -- validation set
  • l_train_val (combines l_train and val)
  • test -- test set

Each line in the text file has a filename and the corresponding class label.
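
For reference, here is a minimal sketch of how a split file could be read; the whitespace-separated filename/label format follows the description above, and the helper name is just for illustration:

def load_split(split_path):
    # Read a split file such as data/semi_aves/l_train.txt, assuming each
    # non-empty line holds an image filename followed by an integer class
    # label, separated by whitespace.
    samples = []
    with open(split_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                samples.append((parts[0], int(parts[1])))
    return samples

# e.g. pairs = load_split('data/semi_aves/l_train.txt')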

Please download the datasets from the corresponding websites. For Semi-Aves, put the data under data/semi_aves. For Semi-Fungi and Semi-CUB, download the images and put them under data/semi_fungi/images and data/cub/images.

Note 1: For the experiments on Semi-Fungi reported in the paper, the images are resized to a maximum of 300px on each side.
Note 2: We reported the results of another split of Semi-Aves in the appendix (for cross-validation), but we do not release its labels because that would leak the labels of the unlabeled data.
Note 3: We also provide the species names of Semi-Aves under data/semi_aves_species_names.txt, as well as the species names of Semi-Fungi. The names were not shared during the competition.

Training and Evaluation (CVPR paper)

We provide the code for all the methods included in the paper, except for FixMatch and MoCo. This includes supervised training, self-training, pseudo-labeling (PL), and curriculum pseudo-labeling. The code is developed based on this PyTorch implementation.
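
As a rough illustration of the pseudo-labeling (PL) step, here is a generic sketch; the confidence threshold and the data-loader interface are assumptions for illustration, not the exact implementation in run_train.py:

import torch

def generate_pseudo_labels(model, unlabeled_loader, threshold=0.95, device='cuda'):
    # Keep predictions on unlabeled images whose softmax confidence exceeds a
    # threshold; the threshold value is a placeholder, not a value from the paper.
    model.eval()
    pseudo_pairs = []
    with torch.no_grad():
        for images in unlabeled_loader:  # assumes batches of image tensors
            probs = torch.softmax(model(images.to(device)), dim=1)
            conf, preds = probs.max(dim=1)
            keep = conf >= threshold
            pseudo_pairs.extend(zip(images[keep].cpu(), preds[keep].cpu()))
    return pseudo_pairs  # (image, pseudo-label) pairs reused as extra training data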

For FixMatch, we used the official TensorFlow code and an unofficial PyTorch implementation to reproduce the results. For MoCo, we used this PyContrast implementation.

To train the model, use the following command:

CUDA_VISIBLE_DEVICES=0 python run_train.py --task ${task} --init ${init} --alg ${alg} --unlabel ${unlabel} --num_iter ${num_iter} --warmup ${warmup} --lr ${lr} --wd ${wd} --batch_size ${batch_size} --exp_dir ${exp_dir} --MoCo ${MoCo} --alpha ${alpha} --kd_T ${kd_T} --trainval

For example, to train a supervised model initialized from an inat pre-trained model on the semi-aves dataset with in-domain unlabeled data only, use:

CUDA_VISIBLE_DEVICES=0 python run_train.py --task semi_aves --init inat --alg supervised --unlabel in --num_iter 10000 --lr 1e-3 --wd 1e-4 --exp_dir semi_aves_supervised_in --MoCo false --trainval

Note that for the Semi-Aves and Semi-Fungi experiments in the paper, we combined the training and validation sets for training (use the --trainval flag).
For all the hyper-parameters, please see the following shell scripts:

  • exp_sup.sh for supervised training
  • exp_PL.sh for pseudo-labeling
  • exp_CPL.sh for curriculum pseudo-labeling
  • exp_MoCo.sh for MoCo + supervised training
  • exp_distill.sh for self-training and MoCo + self-training

Training and Evaluation (BMVC paper)

In our BMVC paper, we add hierarchical supervision from coarse taxonomic labels on top of semi-supervised learning.

To train the model, use the following command:

CUDA_VISIBLE_DEVICES=0 python run_train_hierarchy.py --task ${task} --init ${init} --alg ${alg} --unlabel ${unlabel} --num_iter ${num_iter} --warmup ${warmup} --lr ${lr} --wd ${wd} --batch_size ${batch_size} --exp_dir ${exp_dir} --MoCo ${MoCo} --alpha ${alpha} --kd_T ${kd_T} --level ${level}

The following arguments differ from those above:

  • ${level}: choose from {genus, kingdom, phylum, class, order, family, species}
  • ${alg}: choose from {hierarchy, PL_hierarchy, distill_hierarchy}

For the settings and hyper-parameters, please see exp_hierarchy.sh.
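
As a rough sketch of adding coarse-label supervision on top of the species-level loss, the following illustrates the idea; the species-to-coarse mapping tensor and the loss weight are placeholders, not the exact loss in run_train_hierarchy.py:

import torch
import torch.nn.functional as F

def hierarchical_loss(species_logits, coarse_logits, species_labels, species_to_coarse):
    # Combine fine (species) and coarse (e.g. genus) cross-entropy losses.
    # species_to_coarse is a LongTensor mapping each species index to its
    # coarse-level index; the 0.5 weight is an arbitrary placeholder.
    coarse_labels = species_to_coarse[species_labels]
    loss_fine = F.cross_entropy(species_logits, species_labels)
    loss_coarse = F.cross_entropy(coarse_logits, coarse_labels)
    return loss_fine + 0.5 * loss_coarse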

Pre-Trained Models

We provide supervised training models, MoCo pre-trained models, and MoCo + supervised training models for both Semi-Aves and Semi-Fungi. The models can be downloaded from:

http://vis-www.cs.umass.edu/semi-inat-2021/ssl_evaluation/models/${method}/${dataset}_${initialization}_${unlabel}.pth.tar

  • ${method}: choose from {supervised, MoCo_init, MoCo_supervised}
  • ${dataset}: choose from {semi_aves, semi_fungi}
  • ${initialization}: choose from {scratch, imagenet, inat}
  • ${unlabel}: choose from {in, inout}

You need these models for the self-training methods. For example, the teacher model for self-training is initialized from model/supervised. For MoCo + self-training, the teacher model is initialized from model/MoCo_supervised, and the student model is initialized from model/MoCo_init.
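
For example, here is a small sketch of downloading and inspecting one of these checkpoints, filling the URL pattern with example values from the lists above; the checkpoint's internal keys are an assumption, so adjust to what torch.load actually returns:

import urllib.request
import torch

# Example values: the supervised model for semi_aves, initialized from inat,
# trained with in-domain unlabeled data.
url = ('http://vis-www.cs.umass.edu/semi-inat-2021/ssl_evaluation/models/'
       'supervised/semi_aves_inat_in.pth.tar')
urllib.request.urlretrieve(url, 'semi_aves_inat_in.pth.tar')

# Load on CPU and inspect what the checkpoint stores; the exact contents
# (e.g. whether there is a 'state_dict' entry) are an assumption here.
ckpt = torch.load('semi_aves_inat_in.pth.tar', map_location='cpu')
print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))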

We also provide a ResNet-50 model pre-trained on iNaturalist-18. This model was trained using this GitHub code.

Related Challenges

Citation

@inproceedings{su2021realistic,
  author    = {Jong{-}Chyi Su and Zezhou Cheng and Subhransu Maji},
  title     = {A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}

@inproceedings{su2021taxonomic,
  author    = {Jong{-}Chyi Su and Subhransu Maji},
  title     = {Semi-Supervised Learning with Taxonomic Labels},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2021}
}

@article{su2021semi_iNat,
  author    = {Jong{-}Chyi Su and Subhransu Maji},
  title     = {The Semi-Supervised iNaturalist Challenge at the FGVC8 Workshop},
  journal   = {arXiv preprint arXiv:2106.01364},
  year      = {2021}
}

@article{su2021semi_aves,
  author    = {Jong{-}Chyi Su and Subhransu Maji},
  title     = {The Semi-Supervised iNaturalist-Aves Challenge at FGVC7 Workshop},
  journal   = {arXiv preprint arXiv:2103.06937},
  year      = {2021}
}