Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published at ICLR 2018)

Overview

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

Pouya Samangouei*, Maya Kabkab*, Rama Chellappa

[*: authors contributed equally]

This repository contains the implementation of our ICLR-18 paper: Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

If you find this code or the paper useful, please consider citing:

@inproceedings{defensegan,
  title={Defense-GAN: Protecting classifiers against adversarial attacks using generative models},
  author={Samangouei, Pouya and Kabkab, Maya and Chellappa, Rama},
  booktitle={International Conference on Learning Representations},
  year={2018}
}


Contents

  1. Installation
  2. Usage

Installation

  1. Clone this repository:
git clone --recursive https://github.com/kabkabm/defensegan
cd defensegan
git submodule update --init --recursive
  2. Install requirements:
pip install -r requirements.txt

Note: if you don't have a GPU, install the CPU version of TensorFlow 1.7.
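For example, assuming a pip-based environment, the CPU-only build of that release can be installed with:

pip install tensorflow==1.7.0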

  3. Download the dataset and prepare the data directory:
python download_dataset.py [mnist|f-mnist|celeba]
  4. Create or link output and debug directories:
mkdir output
mkdir debug

or

ln -s <path-to-output> output
ln -s <path-to-debug> debug

Usage

Train a GAN model

python train.py --cfg <path> --is_train <extra-args>
  • --cfg This can be set to either a .yml configuration file like the ones in experiments/cfgs, or an output directory path.
  • <extra-args> can be any parameter that is defined in the config file.

Training creates a per-experiment directory inside output, named after the configuration, where the model checkpoints are saved. If <extra-args> differ from the values defined in <config>, the output directory name will reflect the difference.

A config file is saved into each experiment directory so that the experiment can be reloaded later by passing that directory as <path>.

Example

After running

python train.py --cfg experiments/cfgs/gans/mnist.yml --is_train

output/gans/mnist will be created.

[optional] Save reconstructions and datasets into cache:

python train.py --cfg experiments/cfgs/<config> --save_recs
python train.py --cfg experiments/cfgs/<config> --save_ds

Example

After running the training code for mnist, the reconstructions and the dataset can be saved with:

python train.py --cfg output/gans/mnist --save_recs
python train.py --cfg output/gans/mnist --save_ds

As training goes on, sample outputs of the generator are written to debug/gans/<model_config>.

Black-box attacks

To perform black-box experiments, run blackbox.py [Tables 1 and 2 of the paper]:

python blackbox.py --cfg <path> \
    --results_dir <results_path> \
    --bb_model {A, B, C, D, E} \
    --sub_model {A, B, C, D, E} \
    --fgsm_eps <epsilon> \
    --defense_type {none|defense_gan|adv_tr}
    [--train_on_recs or --online_training]
    <optional-arguments>
  • --cfg is the path to the config file for training the iWGAN. This can also be the path to the output directory of the model.

  • --results_dir The path where the final results are saved in text files.

  • --bb_model The black-box model architecture used in Tables 1 and 2.

  • --sub_model The substitute model architecture used in Tables 1 and 2.

  • --defense_type specifies the type of defense to protect the classifier.

  • --train_on_recs or --online_training These parameters are optional. If either is set, the classifier will be trained on the reconstructions of Defense-GAN (e.g. the Defense-GAN-Rec column of Tables 1 and 2). Otherwise, the results are for Defense-GAN-Orig. Note that --online_training will take a while if --rec_iters, or L in the paper, is set to a large value.

  • <optional-arguments> A list of --<arg_name> <arg_val> pairs matching the hyperparameters defined in the config files (all lower case), as well as the flags defined in blackbox.py. The most important ones are:

    • --rec_iters The number of GD reconstruction iterations for Defense-GAN, or L in the paper.
    • --rec_lr The learning rate of the reconstruction step.
    • --rec_rr The number of random restarts for the reconstruction step, or R in the paper (a sketch of this reconstruction loop follows this list).
    • --num_train The number of images to train the black-box model on. For debugging purposes set this to a small value.
    • --num_test The number of images to test on. For debugging purposes set this to a small value.
    • --debug This will save qualitative attack and reconstruction results in the debug directory and will not run the adversarial attack part of the code.
  • Refer to blackbox.py for more flag descriptions.
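The flags --rec_iters, --rec_lr, and --rec_rr control Defense-GAN's reconstruction step, which projects an input x onto the range of the trained generator G by minimizing ||G(z) - x||^2 over the latent code z, using L gradient-descent iterations for each of R random restarts and keeping the best restart. The snippet below is only a minimal illustrative sketch of that loop; the toy linear generator, dimensions, and default values are assumptions for readability, not the repository's actual implementation.

import numpy as np

# Toy stand-in for a trained generator, G(z) = W z. The real Defense-GAN
# generator is a trained neural network; a linear map keeps the gradient analytic.
rng = np.random.RandomState(0)
W = rng.randn(784, 128) / np.sqrt(784.0)     # 128-d latent code -> 784-d "image"

def G(z):
    return W @ z

def reconstruct(x, rec_iters=200, rec_rr=10, rec_lr=0.1):
    """Minimize ||G(z) - x||^2 with rec_iters (L) gradient-descent steps,
    repeated over rec_rr (R) random restarts; return the best reconstruction."""
    best_rec, best_err = None, np.inf
    for _ in range(rec_rr):                  # R random restarts
        z = rng.randn(128)
        for _ in range(rec_iters):           # L gradient-descent iterations
            grad = 2.0 * W.T @ (G(z) - x)    # analytic gradient for the toy G
            z -= rec_lr * grad
        err = np.sum((G(z) - x) ** 2)
        if err < best_err:
            best_rec, best_err = G(z), err
    return best_rec                          # the classifier sees this instead of x

x = rng.rand(784)                            # a (possibly adversarial) input
x_rec = reconstruct(x)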

Example

  • Row 1 of Table 1, Defense-GAN-Orig:
python blackbox.py --cfg output/gans/mnist \
    --results_dir defensegan \
    --bb_model A \
    --sub_model B \
    --fgsm_eps 0.3 \
    --defense_type defense_gan
  • If you set --nb_epochs 1 --nb_epochs_s 1 --data_aug 1 you will get a quick idea of how the script works.

White-box attacks

To test Defense-GAN against white-box attacks, run whitebox.py [Tables 4, 5, and 12 of the paper]:

python whitebox.py --cfg <path> \
       --results_dir <results-dir> \
       --attack_type {fgsm, rand_fgsm, cw} \
       --defense_type {none|defense_gan|adv_tr} \
       --model {A, B, C, D} \
       [--train_on_recs or --online_training]
       <optional-arguments>
  • --cfg is the path to the config file for training the iWGAN. This can also be the path to the output directory of the model.
  • --results_dir The path where the final results are saved in text files.
  • --defense_type specifies the type of defense to protect the classifier.
  • --train_on_recs or --online_training These parameters are optional. If either is set, the classifier will be trained on the reconstructions of Defense-GAN (e.g. the Defense-GAN-Rec column of Tables 1 and 2). Otherwise, the results are for Defense-GAN-Orig. Note that --online_training will take a while if --rec_iters, or L in the paper, is set to a large value.
  • <optional-arguments> A list of --<arg_name> <arg_val> pairs matching the hyperparameters defined in the config files (all lower case), as well as the flags defined in whitebox.py. The most important ones are:
    • --rec_iters The number of GD reconstruction iterations for Defense-GAN, or L in the paper.
    • --rec_lr The learning rate of the reconstruction step.
    • --rec_rr The number of random restarts for the reconstruction step, or R in the paper.
    • --num_test The number of images to test on. For debugging purposes set this to a small value.
  • Refer to whitebox.py for more flag descriptions. A brief sketch of the FGSM step selected by --attack_type fgsm follows below.
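FGSM perturbs an input in the direction of the sign of the loss gradient, x_adv = x + eps * sign(grad_x loss(x, y)); rand_fgsm first adds a small random sign perturbation and then applies the FGSM step with the remaining budget. The snippet below is a minimal sketch of the FGSM step on a toy logistic-regression classifier; the model, its weights, and the analytic gradient are illustrative assumptions, not the attack code used in this repository.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fgsm(x, y, w, b, eps):
    """One FGSM step on a binary logistic classifier p(y=1|x) = sigmoid(w.x + b).
    The cross-entropy gradient w.r.t. x is (p - y) * w; take one signed step of
    size eps and clip back to the valid pixel range."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w                     # d(cross-entropy)/dx for this model
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)

rng = np.random.RandomState(0)
w, b = rng.randn(784), 0.0                   # toy "trained" classifier
x, y = rng.rand(784), 1.0                    # input in [0, 1] with label 1
x_adv = fgsm(x, y, w, b, eps=0.3)            # eps as in the black-box example above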

Example

First row of Table 4:

python whitebox.py --cfg <path> \
       --results_dir whitebox \
       --attack_type fgsm \
       --defense_type defense_gan \
       --model A
  • If you want to quickly see how the scripts work, add the following flags:
--nb_epochs 1 --num_tests 400