
Overview

Automatic Augmentation Zoo

An integration of several popular automatic augmentation methods, including OHL (Online Hyper-Parameter Learning for Auto-Augmentation Strategy) and AWS (Improving Auto Augment via Augmentation Wise Weight Sharing) by Sensetime Research.

We post updates regularly, so star 🌟 or watch 👓 this repository for the latest.

Introduction

This repository provides the official implementations of OHL and AWS, and will also integrate other popular auto-augmentation methods (such as AutoAugment, Fast AutoAugment, and Adversarial AutoAugment) in pure PyTorch. We use torch.distributed for distributed training. The model checkpoints will be uploaded to Google Drive or OneDrive soon.
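
For orientation, a distributed run in PyTorch is usually initialized along the following lines. This is a minimal sketch of common torch.distributed usage, not this repository's exact entry point; the environment variables are assumed to be set by the launcher.

# Minimal sketch of typical torch.distributed initialization (illustrative only;
# this repo's actual launch logic lives in its scripts and pipeline code).
import os

import torch
import torch.distributed as dist

def init_distributed(backend="nccl"):
    # RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT are expected to be provided
    # by the launcher (e.g. torch.distributed.launch or a cluster scheduler).
    dist.init_process_group(backend=backend)
    local_rank = int(os.environ.get("LOCAL_RANK", dist.get_rank() % torch.cuda.device_count()))
    torch.cuda.set_device(local_rank)
    return dist.get_rank(), dist.get_world_size()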

Dependencies

We recommend conducting experiments under:

  • python 3.6.3
  • pytorch 1.1.0, torchvision 0.2.1

All dependencies are listed in requirements.txt; install them with pip install -r requirements.txt.

Running

  1. Create the directory for your experiment.

cd /path/to/this/repo
mkdir -p exp/aws_search1

  2. Copy the configurations into your workspace.

cp scripts/search.sh configs/aws.yaml exp/aws_search1
cd exp/aws_search1

  3. Start searching.

# sh ./search.sh <experiment_name> <num_gpus>
sh ./search.sh Test 8

An example yaml configuration:

version: 0.1.0

dist:
    type: torch
    kwargs:
        node0_addr: auto
        node0_port: auto
        mp_start_method: fork   # fork or spawn; spawn would be too slow for DataLoader

pipeline:
    type: aws
    common_kwargs:
        dist_training: &dist_training False
#        job_name:         [will be assigned in runtime]
#        exp_root:         [will be assigned in runtime]
#        meta_tb_lg_root:  [will be assigned in runtime]

        data:
            type: cifar100               # case-insensitive (will be converted to lower case in runtime)
#            dataset_root: /path/to/dataset/root   # default: ~/datasets/[type]
            train_set_size: 40000
            val_set_size: 10000
            batch_size: 256
            dist_training: *dist_training
            num_workers: 3
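            # cutout (DeVries & Taylor, 2017) zeroes out a random cutlen x cutlen square in each image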
            cutout: True
            cutlen: 16

        model_grad_clip: 3.0
        model:
            type: WRN
            kwargs:
#                num_classes: [will be assigned in runtime]
                bn_mom: 0.5

        agent:
            type: ppo           # ppo or REINFORCE
            kwargs:
                initial_baseline_ratio: 0
                baseline_mom: 0.9
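                # PPO clipped surrogate: the policy probability ratio is clipped to [1 - clip_epsilon, 1 + clip_epsilon]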
                clip_epsilon: 0.2
                max_training_times: 5
                early_stopping_kl: 0.002
                entropy_bonus: 0
                op_cfg:
                    type: Adam         # any type in torch.optim
                    kwargs:
#                        lr: [will be assigned in runtime] (=sc.kwargs.base_lr)
                        betas: !!python/tuple [0.5, 0.999]
                        weight_decay: 0
                sc_cfg:
                    type: Constant
                    kwargs:
                        base_lr_divisor: 8      # base_lr = warmup_lr / base_lr_divisor
                        warmup_lr: 0.1          # lr at the end of warming up
                        warmup_iters: 10        # number of warmup iterations
                        iters: &finetune_lp 350
        
        criterion:
            type: LSCE
            kwargs:
                smooth_ratio: 0.05


    special_kwargs:
        pretrained_ckpt_path: ~ # /path/to/pretrained_ckpt.pth.tar
        pretrain_ep: &pretrain_ep 200
        pretrain_op: &sgd
            type: SGD       # any type in torch.optim
            kwargs:
#                lr: [will be assigned in runtime] (=sc.kwargs.base_lr)
                nesterov: True
                momentum: 0.9
                weight_decay: 0.0001
        pretrain_sc:
            type: Cosine
            kwargs:
                base_lr_divisor: 4      # base_lr = warmup_lr / base_lr_divisor
                warmup_lr: 0.2          # lr at the end of warming up
                warmup_divisor: 200     # warmup_epochs = epochs / warmup_divisor
                epochs: *pretrain_ep
                min_lr: &finetune_lr 0.001

        finetuned_ckpt_path: ~  # /path/to/finetuned_ckpt.pth.tar
        finetune_lp: *finetune_lp
        finetune_ep: &finetune_ep 10
        rewarded_ep: 2
        finetune_op: *sgd
        finetune_sc:
            type: Constant
            kwargs:
                base_lr: *finetune_lr
                warmup_lr: *finetune_lr
                warmup_iters: 0
                epochs: *finetune_ep

        retrain_ep: &retrain_ep 300
        retrain_op: *sgd
        retrain_sc:
            type: Cosine
            kwargs:
                base_lr_divisor: 4      # base_lr = warmup_lr / base_lr_divisor
                warmup_lr: 0.4          # lr at the end of warming up
                warmup_divisor: 200     # warmup_epochs = epochs / warmup_divisor
                epochs: *retrain_ep
                min_lr: 0
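
Since the config uses yaml anchors and aliases (e.g. &sgd, *sgd) and the !!python/tuple tag, it has to be loaded with a PyYAML loader that permits Python-specific tags. A minimal sketch, assuming PyYAML >= 5.1 is installed; the file name follows the copy step above:

import yaml

# UnsafeLoader is required because of the !!python/tuple tag;
# anchors and aliases are resolved automatically on load.
with open("aws.yaml") as f:
    cfg = yaml.load(f, Loader=yaml.UnsafeLoader)

print(cfg["pipeline"]["type"])                           # aws
print(cfg["pipeline"]["common_kwargs"]["data"]["type"])  # cifar100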

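The LSCE criterion above is presumably label-smoothing cross-entropy, as the smooth_ratio key suggests. Below is a minimal PyTorch sketch of the standard formulation, not necessarily this repository's exact implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelSmoothCE(nn.Module):
    # Cross-entropy against a smoothed target: (1 - smooth_ratio) on the true
    # class, with smooth_ratio spread uniformly over all classes.
    def __init__(self, smooth_ratio=0.05):
        super().__init__()
        self.smooth_ratio = smooth_ratio

    def forward(self, logits, target):
        log_probs = F.log_softmax(logits, dim=-1)
        nll = -log_probs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
        uniform = -log_probs.mean(dim=-1)
        loss = (1.0 - self.smooth_ratio) * nll + self.smooth_ratio * uniform
        return loss.mean()

With smooth_ratio: 0.05 and 100 classes, each class receives a uniform target weight of 0.0005 and the true class 0.9505 in the smoothed distribution.
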
Citation

If you use this code in your research, please consider citing our papers (OHL and AWS).

@inproceedings{lin2019online,
  title={Online Hyper-parameter Learning for Auto-Augmentation Strategy},
  author={Lin, Chen and Guo, Minghao and Li, Chuming and Yuan, Xin and Wu, Wei and Yan, Junjie and Lin, Dahua and Ouyang, Wanli},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={6579--6588},
  year={2019}
}

@article{tian2020improving,
  title={Improving Auto-Augment via Augmentation-Wise Weight Sharing},
  author={Tian, Keyu and Lin, Chen and Sun, Ming and Zhou, Luping and Yan, Junjie and Ouyang, Wanli},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

Contact for Issues

References & Opensources
