
1 Meta-FDMixup

Repository for the paper :

Meta-FDMixup: Cross-Domain Few-Shot Learning Guided by Labeled Target Data. (ACM MM 2021)

paper

News! The presentation video was uploaded to Bilibili on 2021/10/06.

News! The presentation video was uploaded to YouTube on 2021/10/10.


If you have any questions, feel free to contact me. My email is [email protected].

2 setup and datasets

2.1 setup

An Anaconda environment is recommended:

conda create --name py36 python=3.6
conda activate py36
conda install pytorch torchvision -c pytorch
pip3 install "scipy>=1.3.2"
pip3 install "tensorboardX>=1.4"
pip3 install "h5py>=2.9.0"
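
To quickly check that the environment works (a minimal sanity check using the packages installed above):

python3 -c "import torch, torchvision, scipy, tensorboardX, h5py; print(torch.__version__)"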

Then, clone our repo:

git clone https://github.com/lovelyqian/Meta-FDMixup
cd Meta-FDMixup

2.2 datasets

In total, five datasets are used: miniImagenet, CUB, Cars, Places, and Plantae.

  1. Follow FWT-repo to download and set up all datasets. (This can be done quickly.)

  2. Remember to modify the dataset directories in 'options.py' to your own paths.

  3. Under our new setting, we randomly select $num_{target}$ labeled images from the target base set to form the auxiliary set. The splits we used are provided in 'Sources/'; a minimal sketch of this selection step is shown below.
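
As a rough sketch of the selection step in item 3 (illustrative only; the function name and data layout below are hypothetical, and the splits that should actually be used are the ones shipped in 'Sources/'):

# hypothetical sketch: draw num_target labeled images from the target base set
import random

def make_auxiliary_split(base_set, num_target, seed=0):
    # base_set: list of (image_path, class_label) pairs from the target base set
    random.seed(seed)
    return random.sample(base_set, num_target)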

3 pretrained ckps

We provide several pretrained checkpoints.

You can download them and put them in 'output/pretrained_ckps/'.
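
For example, assuming the downloaded .tar files sit in the current directory:

mkdir -p output/pretrained_ckps/
mv pretrained_model_399.tar output/pretrained_ckps/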

3.1 pretrained model trained on miniImagenet

3.2 full model meta-trained on the target datasets

Since our method is target-set specific, we have to train a model for each target dataset.

Notably, as stated in the paper, we use the last checkpoint when evaluating on a target dataset, while the model that performs best on the miniImagenet validation set is used for miniImagenet. Here, we provide the 'miniImagenet|CUB' model as an example.
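
For reference, the checkpoints referenced in the test commands of Section 4 follow this naming pattern (CUB shown; other target sets are analogous):

output/pretrained_ckps/pretrained_model_399.tar                     # pretrained on miniImagenet (Section 3.1)
output/pretrained_ckps/full_model_5shot_target_cub_399.tar          # full model, last checkpoint, target set CUB
output/pretrained_ckps/full_model_5shot_target_cub_best_eval.tar    # full model, best on the miniImagenet validation set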

4 usage

4.1 network pretraining

python3 network_train.py --stage pretrain  --name pretrain-model --train_aug 

If you have downloaded our pretrained_model_399.tar, you can just skip this step.

4.2 pretrained model testing

# test source dataset (miniImagenet)
python network_test.py --ckp_path output/checkpoints/pretrain-model/399.tar --stage pretrain --dataset miniImagenet --n_shot 5 

# test target dataset e.g. cub
python network_test.py --ckp_path output/checkpoints/pretrain-model/399.tar --stage pretrain --dataset cub --n_shot 5

You can test our pretrained_model_399.tar in the same way:

# test source dataset (miniImagenet)
python network_test.py --ckp_path output/pretrained_ckps/pretrained_model_399.tar --stage pretrain --dataset miniImagenet --n_shot 5 


# test target dataset e.g. cub
python network_test.py --ckp_path output/pretrained_ckps/pretrained_model_399.tar --stage pretrain --dataset cub --n_shot 5

4.3 network meta-training

# target set: CUB
python3 network_train.py --stage metatrain --name metatrain-model-5shot-cub --train_aug --warmup output/checkpoints/pretrain-model/399.tar --target_set cub --n_shot 5

# target set: Cars
python3 network_train.py --stage metatrain --name metatrain-model-5shot-cars --train_aug --warmup output/checkpoints/pretrain-model/399.tar --target_set cars --n_shot 5

# target set: Places
python3 network_train.py --stage metatrain --name metatrain-model-5shot-places --train_aug --warmup output/checkpoints/pretrain-model/399.tar --target_set places --n_shot 5

# target set: Plantae
python3 network_train.py --stage metatrain --name metatrain-model-5shot-plantae --train_aug --warmup output/checkpoints/pretrain-model/399.tar --target_set plantae --n_shot 5

Also, you can use our pretrained_model_399.tar for warmup:

# target set: CUB
python3 network_train.py --stage metatrain --name metatrain-model-5shot-cub --train_aug --warmup output/pretrained_ckps/pretrained_model_399.tar --target_set cub --n_shot 5

4.4 network testing

To test our provided full models:

# test target dataset (CUB)
python network_test.py --ckp_path output/pretrained_ckps/full_model_5shot_target_cub_399.tar --stage metatrain --dataset cub --n_shot 5 

# test target dataset (Cars)
python network_test.py --ckp_path output/pretrained_ckps/full_model_5shot_target_cars_399.tar --stage metatrain --dataset cars --n_shot 5 

# test target dataset (Places)
python network_test.py --ckp_path output/pretrained_ckps/full_model_5shot_target_places_399.tar --stage metatrain --dataset places --n_shot 5 

# test target dataset (Plantae)
python network_test.py --ckp_path output/pretrained_ckps/full_model_5shot_target_plantae_399.tar --stage metatrain --dataset plantae --n_shot 5 


# test source dataset (miniImagenet|CUB)
python network_test.py --ckp_path output/pretrained_ckps/full_model_5shot_target_cub_best_eval.tar --stage metatrain --dataset miniImagenet --n_shot 5 

To test your own models, just modify '--ckp_path' accordingly.
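
For example, to test a model you meta-trained yourself with the commands in 4.3 (the path below assumes checkpoints are saved under output/checkpoints/<--name>/ and that the last epoch is 399, as in the pretraining stage; adjust it to whatever your run actually saved):

# test your own meta-trained model on CUB
python network_test.py --ckp_path output/checkpoints/metatrain-model-5shot-cub/399.tar --stage metatrain --dataset cub --n_shot 5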

5 citing

If you find our paper or this code useful for your research, please cite us:

@article{fu2021meta,
  title={Meta-FDMixup: Cross-Domain Few-Shot Learning Guided by Labeled Target Data},
  author={Fu, Yuqian and Fu, Yanwei and Jiang, Yu-Gang},
  journal={arXiv preprint arXiv:2107.11978},
  year={2021}
}

6 Note

Notably, our code is built upon the implementation of FWT-repo.
