PyTorch implementation for "Adversarial Robustness under Long-Tailed Distribution" (CVPR 2021 Oral)

Overview

Adversarial Long-Tail

This repository contains the PyTorch implementation of the paper:

Adversarial Robustness under Long-Tailed Distribution, CVPR 2021 (Oral)

Tong Wu, Ziwei Liu, Qingqiu Huang, Yu Wang, Dahua Lin

Real-world data usually exhibits a long-tailed distribution, while previous works on adversarial robustness mainly focus on balanced datasets. To push adversarial robustness towards more realistic scenarios, we investigate both adversarial vulnerability and defense under long-tailed distributions. We perform a systematic study of existing long-tailed recognition (LT) methods in conjunction with the adversarial training (AT) framework and obtain several valuable observations. We then propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier, and data re-balancing via both margin engineering at the training stage and boundary adjustment during inference.

This repository includes:

  • Code for the LT methods applied with the AT framework in our study.
  • Code and pre-trained models for our method.

Environment

Datasets

We use the CIFAR-10-LT and CIFAR-100-LT datasets. The data will be automatically downloaded and converted.
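For reference, the conversion to a long-tailed subset typically keeps an exponentially decaying number of samples per class. Below is a minimal sketch of this standard construction (the repo's own conversion code may differ in details), where ratio is the size of the rarest class relative to the largest, e.g. 0.02 for CIFAR-10-LT:

# Minimal sketch: class k keeps n_max * ratio ** (k / (num_classes - 1))
# samples, so the rarest class ends up with ratio * n_max samples.
import numpy as np
import torchvision

def long_tailed_indices(targets, num_classes=10, ratio=0.02, n_max=5000):
    targets = np.asarray(targets)
    indices = []
    for k in range(num_classes):
        n_k = int(n_max * ratio ** (k / (num_classes - 1)))
        cls_idx = np.where(targets == k)[0]
        indices.extend(cls_idx[:n_k])
    return indices

train_set = torchvision.datasets.CIFAR10(root='./data', train=True, download=True)
lt_idx = long_tailed_indices(train_set.targets)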

Usage

Baseline

To train and evaluate a baseline model, run the following commands:

# Vanilla FC for CIFAR-10-LT
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat.yaml
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat.yaml -a ALL

# Vanilla FC for CIFAR-100-LT
python train.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat.yaml
python test.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat.yaml -a ALL

Here -a ALL denotes that we evaluate five attacks: FGSM, PGD, MIM, CW, and AutoAttack.
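For orientation, PGD is the core attack among these. A minimal L-infinity PGD sketch is shown below; it is illustrative only, and the actual attack hyper-parameters used here are set in the config files:

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    # Standard L-infinity PGD: start from a random point in the eps-ball,
    # then repeatedly ascend the loss and project back onto the ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()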

Long-tailed recognition methods with adversarial training framework

We provide scripts for the long-tailed recognition methods applied with the adversarial training framework, as reported in our study. We mainly provide config files for CIFAR-10-LT. For CIFAR-100-LT, simply set imbalance_ratio=0.1, dataset=CIFAR100, and num_classes=100 in the config file, and don't forget to change model_dir (the working directory for log files and checkpoints) and model_path (the checkpoint to be evaluated by test.py).
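As an illustration, the relevant fields of a CIFAR-100-LT config would look like the following (only the field names mentioned above come from the repo; values in angle brackets are placeholders):

# Illustrative excerpt of a CIFAR-100-LT config; the surrounding layout
# of the real config files is not shown here.
dataset: CIFAR100
num_classes: 100
imbalance_ratio: 0.1
model_dir: <working-directory-for-logs-and-checkpoints>
model_path: <checkpoint-to-evaluate-by-test.py>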

Methods applied at training time.

Methods applied at the training stage include class-aware re-sampling and different kinds of cost-sensitive learning; a sketch of one such loss follows the commands below.

Train the models with the corresponding config files:

# Vanilla Cos
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_cos.yaml

# Class-aware margin
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_outer_LDAM.yaml

# Cosine with margin
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_cos_HE.yaml

# Class-aware temperature
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_outer_CDT.yaml

# Class-aware bias
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_outer_logitadjust.yaml

# Hard-example mining
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_outer_focal.yaml

# Re-sampling
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_rs-whole.yaml

# Re-weighting (based on effective number of samples)
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_outer_CB.yaml
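As a concrete example of cost-sensitive learning, re-weighting by the effective number of samples (the CB config above) can be sketched as follows; this is an illustration of the standard Class-Balanced Loss, and the implementation in this repo may differ in details:

import torch
import torch.nn.functional as F

def class_balanced_weights(samples_per_class, beta=0.9999):
    # Class y gets weight proportional to (1 - beta) / (1 - beta ** n_y),
    # normalized so the weights sum to the number of classes.
    n = torch.as_tensor(samples_per_class, dtype=torch.float)
    weights = (1.0 - beta) / (1.0 - beta ** n)
    return weights / weights.sum() * len(samples_per_class)

# Usage: pass the per-class weights to the cross-entropy loss, e.g.
#   weights = class_balanced_weights(train_class_counts)
#   loss = F.cross_entropy(logits, targets, weight=weights)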

Evaluate the models with the same config files used for training:

python test.py <the-config-file-used-for-training>.yaml -a ALL

Methods applied via fine-tuning.

Fine-tuning-based methods re-train or fine-tune the classifier via data re-balancing techniques while keeping the backbone frozen (see the sketch after the commands below).

Train a baseline model first, and then set load_model in the following config files to <folder-name-of-the-baseline-model>/epoch80.pt (the path to the last-epoch checkpoint; the directory settings in this repo are already aligned). Run fine-tuning by:

# One-epoch re-sampling
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_rs-fine.yaml

# One-epoch re-weighting
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_rw-fine.yaml 

# Learnable classifier scale
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_lws.yaml 
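For orientation, the core of such fine-tuning is freezing the backbone and updating only the classifier. A minimal sketch follows; it is illustrative only (the repo drives this through its config files), and 'fc' as the classifier name is an assumption:

import torch

def freeze_backbone(model):
    for name, param in model.named_parameters():
        # Keep only the classifier trainable; adjust 'fc' to the
        # actual classifier attribute name of the model in use.
        if not name.startswith('fc'):
            param.requires_grad = False

# Usage: optimize only the remaining trainable parameters, e.g.
#   optimizer = torch.optim.SGD(
#       (p for p in model.parameters() if p.requires_grad), lr=0.01)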

Evaluate the models with the same config files used for training:

python test.py <the-config-file-used-for-training>.yaml -a ALL

Methods applied at inference time.

Methods applied at the inference stage start from a vanilla trained model and usually adopt a forward pass different from that of training, in order to compensate for the distribution shift between the training and test sets (a sketch of one such strategy follows the commands below).

Similarly, train a baseline model first, and this time set model_path in the following config files to <folder-name-of-the-baseline-model>/epoch80.pt (the path to the last-epoch checkpoint; the directory settings in this repo are already aligned). Run evaluation with a given inference-time strategy by:

# Classifier re-scaling
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_post_CDT.yaml -a ALL

# Classifier normalization
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_post_norm.yaml -a ALL

# Class-aware bias
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_post_logitadjust.yaml -a ALL
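As an illustration of such an inference-time strategy, class-aware bias (logit adjustment) subtracts a class-prior term from the logits before prediction. A minimal sketch follows; the exact variants in this repo are configured by the *_post_*.yaml files above:

import torch

def logit_adjusted_predict(logits, class_priors, tau=1.0):
    # class_priors: a tensor of empirical class frequencies of the
    # training set; tau controls the strength of the adjustment.
    adjusted = logits - tau * torch.log(class_priors)
    return adjusted.argmax(dim=1)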

A baseline model is not always applicable, since some methods use a cosine classifier together with statistics recorded during training. For example, to apply the method below, train the model by:

# Feature disentangling
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_TDESim.yaml 

Set posthoc to True in the config file, and evaluate the model by:

python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_TDESim.yaml -a ALL

Attention: methods that involve loss temperatures or classifier scaling may produce unexpectedly high robust accuracy against PGD and MIM attacks, which is NOT reliable, as analyzed in Sec. 3.3 of our paper. This phenomenon can sometimes be observed at validation time during training. For a reliable evaluation, it is therefore essential to keep the logit scales at a similar level during both training and inference.
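A toy illustration (not from this repo) of the underlying gradient masking: as the logit scale grows, the softmax saturates on correctly classified points, the cross-entropy gradient vanishes, and PGD/MIM are left without a useful ascent direction:

import torch
import torch.nn.functional as F

logits = torch.zeros(1, 10)
logits[0, 3] = 1.0              # class 3 is (correctly) the top logit
logits.requires_grad_(True)
y = torch.tensor([3])
for s in (1.0, 10.0, 100.0):
    loss = F.cross_entropy(s * logits, y)
    grad, = torch.autograd.grad(loss, logits)
    # The gradient norm shrinks rapidly as the scale s grows.
    print(f"scale={s:6.1f}  grad norm={grad.norm():.2e}")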

Our method

For our method, the config files used at the training and inference stages can differ; they are denoted <config-prefix>_train.yaml and <config-prefix>_eval.yaml, respectively.
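For orientation, the scale-invariant classifier in RoBal is cosine-based. A minimal sketch of a plain cosine classifier follows; it is illustrative only and omits the margin and re-balancing terms of the full method (see the paper and config files for the exact formulation):

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, in_features, num_classes, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_features))
        self.scale = scale

    def forward(self, feat):
        # Logits are scaled cosine similarities between the normalized
        # features and the normalized class weight vectors.
        return self.scale * F.linear(F.normalize(feat), F.normalize(self.weight))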

Training stage

Train the models by running:

# CIFAR-10-LT
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_robal_N_train.yaml
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_robal_R_train.yaml

# CIFAR-100-LT
python train.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat_robal_N_train.yaml
python train.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat_robal_R_train.yaml

Attention: the evaluation results reported at the end of training with the training config file do not include the re-balancing strategy applied at the inference stage; switch to the evaluation config file to complete the process.

Inference stage

Evaluate the models by running:

# CIFAR-10-LT
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_robal_N_eval.yaml -a ALL
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_robal_R_eval.yaml -a ALL

# CIFAR-100-LT
python test.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat_robal_N_eval.yaml -a ALL
python test.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat_robal_R_eval.yaml -a ALL

Pre-trained models

We provide pre-trained models for our methods above. Download and extract them to the ./checkpoints directory, and reproduce the results with the eval.yaml in the corresponding folders by running:

python test.py checkpoints/<folder-name-of-the-pretrained-model>/eval.yaml -a ALL

License and Citation

If you find our code or paper useful, please cite our paper:

@inproceedings{wu2021advlt,
  author    = {Wu, Tong and Liu, Ziwei and Huang, Qingqiu and Wang, Yu and Lin, Dahua},
  title     = {Adversarial Robustness under Long-Tailed Distribution},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}

Acknowledgement

We thank the authors of the following repositories for code reference: TRADES, AutoAttack, ADT, Class-Balanced Loss, LDAM-DRW, OLTR, AT-HE, Classifier-Balancing, mma_training, TDE, etc.

Contact

Please contact @wutong16 for questions, comments, and bug reports.
