Certified Patch Robustness via Smoothed Vision Transformers

This repository contains the code for replicating the results of our paper:

Certified Patch Robustness via Smoothed Vision Transformers
Hadi Salman*, Saachi Jain*, Eric Wong*, Aleksander Madry

Paper
Blog post Part I.
Blog post Part II.

    @article{salman2021certified,
        title={Certified Patch Robustness via Smoothed Vision Transformers},
        author={Hadi Salman and Saachi Jain and Eric Wong and Aleksander Madry},
        journal={arXiv preprint arXiv:2110.07719},
        year={2021}
    }

Getting started

Our code relies on the MadryLab public robustness library, which will be automatically installed when you follow the instructions below.

  1. Clone our repo: git clone https://github.mit.edu/hady/smoothed-vit

  2. Install dependencies:

    conda create -n smoothvit python=3.8
    conda activate smoothvit
    pip install -r requirements.txt
    

Full pipeline for building smoothed ViTs

Now, we will walk you through the steps to create a smoothed ViT on the CIFAR-10 dataset. Similar steps can be followed for other datasets.

The entry point of our code is main.py (see the file for a full description of arguments).

First, we will train the base classifier with ablations as data augmentation. Then, we will apply derandomized smoothing to build a smoothed version of the model that is certifiably robust.
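
For intuition, here is a minimal sketch of what a single column ablation of a batch of images could look like. The function name, tensor shapes, and the extra mask channel are our own illustrative assumptions, not the repository's exact implementation:

    import torch

    def column_ablate(images, start_col, ablation_size=4):
        """Keep a vertical strip of width `ablation_size` starting at `start_col`
        (wrapping around the right edge) and zero out every other pixel.

        images: tensor of shape (N, C, H, W).
        Returns the ablated images and the binary mask of retained columns.
        """
        n, _, h, w = images.shape
        cols = torch.arange(w)
        keep = ((cols - start_col) % w) < ablation_size           # (W,) bool
        mask = keep.view(1, 1, 1, w).float().expand(n, 1, h, w)   # (N, 1, H, W)
        return images * mask, mask

    # Drawing start_col at random for every batch turns the ablation into a
    # data augmentation during training, e.g.:
    # x_abl, m = column_ablate(x, start_col=int(torch.randint(0, x.shape[-1], (1,))))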

Training the base classifier

The first step is to train the base classifier (here a ViT-Tiny) with ablations.

python src/main.py \
      --dataset cifar10 \
      --data /tmp \
      --arch deit_tiny_patch16_224 \
      --pytorch-pretrained \
      --out-dir OUTDIR \
      --exp-name demo \
      --epochs 30 \
      --lr 0.01 \
      --step-lr 10 \
      --batch-size 128 \
      --weight-decay 5e-4 \
      --adv-train 0 \
      --freeze-level -1 \
      --drop-tokens \
      --cifar-preprocess-type simple224 \
      --ablate-input \
      --ablation-type col \
      --ablation-size 4
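
A note on --drop-tokens: since a column ablation zeroes out most of the image, the efficiency gain described in the paper comes from passing the ViT only the patch tokens that overlap the retained columns. The helper below is a rough sketch of that idea under assumed shapes; it is not the repository's API:

    import torch

    def visible_token_indices(mask, patch_size=16):
        """Given an (H, W) pixel mask of retained pixels, return the flat indices
        of ViT patch tokens that contain at least one retained pixel. Fully
        ablated tokens carry no information and can be dropped from the input.
        """
        h, w = mask.shape
        patches = mask.view(h // patch_size, patch_size, w // patch_size, patch_size)
        per_token = patches.sum(dim=(1, 3)).flatten()   # one count per token
        return torch.nonzero(per_token > 0).flatten()

    # For a 224x224 input and a width-4 column ablation, the retained strip touches
    # at most two of the fourteen token columns, so only ~28 of 196 tokens remain.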

Once training is done, the model is saved in OUTDIR/demo/.

Certifying the smoothed classifier

Now we are ready to apply derandomized smoothing to obtain certificates for each datapoint against adversarial patches. To do so, simply run:

python src/main.py \
      --dataset cifar10 \
      --data /tmp \
      --arch deit_tiny_patch16_224 \
      --out-dir OUTDIR \
      --exp-name demo \
      --batch-size 128 \
      --adv-train 0 \
      --freeze-level -1 \
      --drop-tokens \
      --cifar-preprocess-type simple224 \
      --resume \
      --eval-only 1 \
      --certify \
      --certify-out-dir OUTDIR_CERT \
      --certify-mode col \
      --certify-ablation-size 4 \
      --certify-patch-size 5

This will calculate the standard and certified accuracies of the smoothed model. The results will be dumped into OUTDIR_CERT/demo/.
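
For reference, the certificate itself reduces to a vote-counting argument: with column ablation size b, a square patch of side p can intersect at most Δ = p + b - 1 column ablations, so a prediction is certified whenever the winning class leads the runner-up by more than 2Δ votes. Below is a minimal sketch of that final check, assuming the per-class vote counts have already been collected; the actual evaluation batches all ablations through the base classifier and also handles ties:

    import torch

    def certify_column_smoothing(vote_counts, patch_size=5, ablation_size=4):
        """vote_counts: (num_classes,) tensor of votes over all column ablations
        of one image. Returns (predicted_class, is_certified)."""
        # A patch of width p can intersect at most p + b - 1 column ablations.
        delta = patch_size + ablation_size - 1
        top2 = torch.topk(vote_counts, 2)
        margin = top2.values[0] - top2.values[1]
        # The adversary controls at most delta ablations, so it can take away at
        # most delta votes from the top class and add at most delta votes to any
        # other class; a margin above 2 * delta therefore fixes the majority vote.
        return int(top2.indices[0]), bool(margin > 2 * delta)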

That's it! Now you can replicate all the results of our paper.

Download our ImageNet models

If you find our pretrained models useful, please consider citing our work.

Models trained with column ablations

Model        Ablation size = 19
ResNet-18    LINK
ResNet-50    LINK
WRN-101-2    LINK
ViT-T        LINK
ViT-S        LINK
ViT-B        LINK

We have uploaded the most important models. If you need any other model (for example, from the sweeps), please let us know and we will be happy to provide it!

Maintainers

Madry Lab