SelfAugment

Paper

@misc{reed2020selfaugment,
      title={SelfAugment: Automatic Augmentation Policies for Self-Supervised Learning}, 
      author={Colorado Reed and Sean Metzger and Aravind Srinivas and Trevor Darrell and Kurt Keutzer},
      year={2020},
      eprint={2009.07724},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

SelfAugment extends MoCo to include automatic unsupervised augmentation selection. In addition, we've added the ability to pretrain on several new datasets and included a wandb integration.

Using your own dataset.

To use your own dataset, carefully check the three main scripts and incorporate your dataset into each:

  1. main_moco.py
  2. main_lincls.py
  3. faa.py

Some things to check:

  1. Ensure that the image sizing for your dataset is correct. If your images are 32x32 (e.g. CIFAR10), make sure you are using the CIFAR10-style model, which uses a 3x3 input conv and resizes images to 28x28 rather than 224x224 (as for ImageNet). This can make a big difference!
  2. If you want SelfAugment to run quickly, consider using a small subset of your full dataset. For example, for ImageNet we use only a small subset of the data: 50,000 random images. This may mean that you need to run unsupervised pretraining for more epochs than usual. We usually scale the number of epochs MoCo v2 runs so that the total number of iterations on the subset matches, or is slightly smaller than, the total on the full dataset (see the sketch after this list).
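
As a rough illustration (not part of the repo), the epoch scaling above can be computed as follows; the helper name and the batch size of 256 are assumptions:

```python
# Minimal sketch: keep the total number of pretraining iterations roughly
# constant when switching from the full dataset to a random subset.
import math

def scaled_epochs(full_size, subset_size, full_epochs, batch_size=256):
    """Epochs to run on the subset so total iterations match the full-dataset schedule."""
    full_iters = full_epochs * math.ceil(full_size / batch_size)
    subset_iters_per_epoch = math.ceil(subset_size / batch_size)
    return full_iters // subset_iters_per_epoch  # floor -> "the same, or a bit smaller"

# Example: a 200-epoch ImageNet schedule (~1.28M images) mapped onto a 50k subset.
print(scaled_epochs(full_size=1_281_167, subset_size=50_000, full_epochs=200))
```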

Base augmentation.

To find the base augmentation, use slm_utils/submit_single_augmentations.py.

This will produce 16 models, each trained with self-supervised learning using ONLY the single augmentation provided. slm_utils/submit_single_augmentations.py is currently set up for ImageNet, so it uses the data subset for this part.

Then you will need to train a rotation classifier for each model; this can be done using main_lincls.py. A sketch of the rotation task is shown below.
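
For reference, here is a minimal sketch of the rotation-prediction setup the rotation classifiers are trained for (the standard 0/90/180/270 degree rotation task); the helper below is illustrative, not the repo's exact code:

```python
# Rotate each image by 0/90/180/270 degrees; a linear head on top of the frozen
# encoder is then trained to predict which rotation was applied.
import torch

def rotate_batch(images):
    """images: (N, C, H, W) -> rotated images (4N, C, H, W) and rotation labels (4N,)."""
    rotated, labels = [], []
    for k in range(4):  # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

x, y = rotate_batch(torch.randn(8, 3, 32, 32))  # dummy batch
print(x.shape, y.shape)                         # (32, 3, 32, 32) and (32,)
```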

Train 5 MoCo v2 models on the folds of your data.

To get started, train 5 MoCo models, one per fold, using only the base augmentation. To do this, you can run python slm_utils/submit_moco_folds.py (a sketch of building the folds is shown below).
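
If you need to build the folds yourself, a simple sketch using scikit-learn (an assumed helper, not the exact logic in slm_utils) looks like this:

```python
# Split a dataset's indices into 5 folds; each fold backs one MoCo v2 pretraining run.
import numpy as np
from sklearn.model_selection import KFold

def make_folds(num_samples, n_splits=5, seed=0):
    """Return a list of index arrays, one per fold."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    return [fold_idx for _, fold_idx in kf.split(np.arange(num_samples))]

folds = make_folds(num_samples=50_000)
for i, idx in enumerate(folds):
    print(f"fold {i}: {len(idx)} images")
```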

Run SelfAug

Now run SelfAug on your dataset. Note: some changes to the dataloaders may be necessary depending on your dataset.

@Colorado, working on making this process cleaner.

For now, you will need to go into faa_search_single_aug_minmax_w.py and edit the config there (I will change this to argparse soon). The most critical part is entering your checkpoint names, in fold order, under config.checkpoints.

The loss can be rotation, icl, or icl_and_rotation. If you are using icl_and_rotation, you will need to normalize the weights in the loss_weight dict so that each loss's weight is 1/(average loss across the k folds); for these averages I would just use the losses logged to wandb (the rotation train loss and the ICL loss from pretraining). Finally, Ray is set up to maximize the negative loss, so a negative weight in the loss weights means that the corresponding loss will be maximized. A sketch of this config is shown below.
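
A hedged sketch of the config edit this describes; the exact field names and structure in faa_search_single_aug_minmax_w.py may differ, and the checkpoint paths and loss values are placeholders:

```python
# Checkpoints must be listed in the same order as the data folds.
config = {
    "checkpoints": [
        "checkpoints/moco_fold0.pth.tar",  # hypothetical paths
        "checkpoints/moco_fold1.pth.tar",
        "checkpoints/moco_fold2.pth.tar",
        "checkpoints/moco_fold3.pth.tar",
        "checkpoints/moco_fold4.pth.tar",
    ],
    "loss": "icl_and_rotation",  # or "rotation" / "icl"
}

# Average per-fold losses taken from wandb (example numbers only).
avg_rot_loss = 1.25
avg_icl_loss = 6.80

# Each weight is 1 / (average loss across the k folds). A negative weight would
# make Ray maximize that loss rather than minimize it.
loss_weight = {
    "rotation": 1.0 / avg_rot_loss,
    "icl": 1.0 / avg_icl_loss,
}
```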

Retrain using new augmentations found by SelfAug.

Just make sure to change the augmentation path in the load_policies function in get_faa_transforms.py to the pickle file containing your new augmentations (a minimal sketch is shown below). Then submit the job using slm_utils/submit_faa_moco.py.
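
A minimal sketch of pointing load_policies at your own pickle file; the path and the details of the real function in get_faa_transforms.py are assumptions:

```python
# Load the augmentation policies found by SelfAug from a pickle file.
import pickle

POLICY_PATH = "policies/selfaug_policies.pkl"  # hypothetical path to your new policies

def load_policies(path=POLICY_PATH):
    with open(path, "rb") as f:
        return pickle.load(f)

policies = load_policies()
print(type(policies))
```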
