Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation (ICCV2021)

Overview

This is a PyTorch project for the paper Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation by Xiaogang Xu, Hengshuang Zhao, and Jiaya Jia, presented at ICCV 2021.

paper link, arxiv

Introduction

Adversarial training is a promising way to improve the robustness of deep neural networks against adversarial perturbations, especially for classification. In contrast, its effect on semantic segmentation has only begun to be explored. We make an initial attempt to explore defense strategies for semantic segmentation by formulating a general adversarial training procedure that performs decently on both adversarial and clean samples. We propose a dynamic divide-and-conquer adversarial training (DDC-AT) strategy that enhances the defense effect by adding extra branches to the target model during training and handling pixels with different properties with respect to adversarial perturbation. Our dynamic division mechanism assigns pixels to these branches automatically. Note that all additional branches can be discarded during inference, leaving no extra parameters or computation cost. Extensive experiments with various segmentation models on the PASCAL VOC 2012 and Cityscapes datasets show that DDC-AT yields satisfying performance under both white- and black-box attacks.

Project Setup

For multiprocessing training we use apex; the code is tested with PyTorch 1.0.1.

We recommend installing Python 3 and PyTorch with Anaconda:

conda create --name py36 python=3.6
source activate py36
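
Then install PyTorch 1.0.1 and a matching torchvision, for example with conda (the exact cudatoolkit version should match your driver; consult the PyTorch previous-versions page if this combination is unavailable):

conda install pytorch==1.0.1 torchvision==0.2.2 cudatoolkit=10.0 -c pytorch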

Clone the repository and install the remaining requirements:

cd $HOME
git clone --recursive [email protected]:dvlab-research/Robust_Semantic_Segmentation.git
cd Robust_Semantic_Segmentation
pip install -r requirements.txt

Our experiments run with CUDA 10.2 on TITAN V GPUs. You also need to install apex for training.
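
A typical way to install apex is to build it from source; check the NVIDIA apex repository for the command matching your PyTorch and CUDA versions, as the flags below are only one common option:

git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./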

Requirement

  • Hardware: 4-8 GPUs (preferably with at least 11 GB of GPU memory each)

Train

  • Download the related datasets and modify the corresponding data paths in the folder "config" (see the illustrative snippet after this list).
  • Download the ImageNet pre-trained models and put them under the folder initmodel for weight initialization.
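
The snippet below is only an illustration of the kind of data-path fields to update; the field names here are hypothetical, so check the actual YAML files under "config" for the exact keys used by this repository:

DATA:
  data_root: /path/to/your/dataset        # hypothetical key: dataset root directory
  train_list: /path/to/train_list.txt     # hypothetical key: list of training images/labels
  val_list: /path/to/val_list.txt         # hypothetical key: list of validation images/labels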

Cityscapes

  • Train the baseline model with no defense on Cityscapes with PSPNet
    sh tool_train/cityscapes/psp_train.sh
    
  • Train the baseline model with no defense on Cityscapes with DeepLabv3
    sh tool_train/cityscapes/aspp_train.sh
    
  • Train the model with SAT on Cityscapes with PSPNet
    sh tool_train/cityscapes/psp_train_sat.sh
    
  • Train the model with SAT on Cityscapes with DeepLabv3
    sh tool_train/cityscapes/aspp_train_sat.sh
    
  • Train the model with DDCAT on Cityscapes with PSPNet
    sh tool_train/cityscapes/psp_train_ddcat.sh
    
  • Train the model with DDCAT on Cityscapes with DeepLabv3
    sh tool_train/cityscapes/aspp_train_ddcat.sh
    

VOC2012

  • Train the baseline model with no defense on VOC2012 with PSPNet
    sh tool_train/voc2012/psp_train.sh
    
  • Train the baseline model with no defense on VOC2012 with DeepLabv3
    sh tool_train/voc2012/aspp_train.sh
    
  • Train the model with SAT on VOC2012 with PSPNet
    sh tool_train/voc2012/psp_train_sat.sh
    
  • Train the model with SAT on VOC2012 with DeepLabv3
    sh tool_train/voc2012/aspp_train_sat.sh
    
  • Train the model with DDCAT on VOC2012 with PSPNet
    sh tool_train/voc2012/psp_train_ddcat.sh
    
  • Train the model with DDCAT on VOC2012 with DeepLabv3
    sh tool_train/voc2012/aspp_train_ddcat.sh
    

You can use tensorboardX to visualize the training loss:

tensorboard --logdir=exp/path_to_log

Test

We provide scripts for evaluation that report the mIoU on both clean and adversarial samples (the adversarial samples are generated by an attack with n=2, epsilon=0.03 x 255, and alpha=0.01 x 255).
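
For reference, the following is a minimal PyTorch sketch of such an n-step FGSM-style (BIM/PGD) attack with these parameters; it is an illustration under the assumption that inputs are in the [0, 255] range, not the exact attack implementation used by the test scripts:

import torch
import torch.nn.functional as F

def bim_attack(model, image, label, n=2, epsilon=0.03 * 255, alpha=0.01 * 255):
    # Iterative FGSM: take n signed-gradient steps of size alpha and
    # project the result back into an L_inf ball of radius epsilon.
    adv = image.clone().detach()
    for _ in range(n):
        adv.requires_grad_(True)
        logits = model(adv)  # per-pixel logits of shape (B, C, H, W)
        loss = F.cross_entropy(logits, label, ignore_index=255)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()                    # one FGSM step
        adv = image + torch.clamp(adv - image, -epsilon, epsilon)   # L_inf projection
        adv = torch.clamp(adv, 0, 255)                              # keep pixels in a valid range
    return adv.detach()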

Cityscapes

  • Evaluate the PSPNet trained with no defense on Cityscapes
    sh tool_test/cityscapes/psp_test.sh
    
  • Evaluate the PSPNet trained with SAT on Cityscapes
    sh tool_test/cityscapes/psp_test_sat.sh
    
  • Evaluate the PSPNet trained with DDCAT on Cityscapes
    sh tool_test/cityscapes/psp_test_ddcat.sh
    
  • Evaluate the DeepLabv3 trained with no defense on Cityscapes
    sh tool_test/cityscapes/aspp_test.sh
    
  • Evaluate the DeepLabv3 trained with SAT on Cityscapes
    sh tool_test/cityscapes/aspp_test_sat.sh
    
  • Evaluate the DeepLabv3 trained with DDCAT on Cityscapes
    sh tool_test/cityscapes/aspp_test_ddcat.sh
    

VOC2012

  • Evaluate the PSPNet trained with no defense on VOC2012
    sh tool_test/voc2012/psp_test.sh
    
  • Evaluate the PSPNet trained with SAT on VOC2012
    sh tool_test/voc2012/psp_test_sat.sh
    
  • Evaluate the PSPNet trained with DDCAT on VOC2012
    sh tool_test/voc2012/psp_test_ddcat.sh
    
  • Evaluate the DeepLabv3 trained with no defense on VOC2012
    sh tool_test/voc2012/aspp_test.sh
    
  • Evaluate the DeepLabv3 trained with SAT on VOC2012
    sh tool_test/voc2012/aspp_test_sat.sh
    
  • Evaluate the DeepLabv3 trained with DDCAT on VOC2012
    sh tool_test/voc2012/aspp_test_ddcat.sh
    

Pretrained Models

You can download the pretrained models from https://drive.google.com/file/d/120xLY_pGZlm3tqaLxTLVp99e06muBjJC/view?usp=sharing
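
If you prefer downloading from the command line, one option (a third-party tool, not required by this repository) is gdown, using the file ID from the link above:

pip install gdown
gdown "https://drive.google.com/uc?id=120xLY_pGZlm3tqaLxTLVp99e06muBjJC"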

Cityscapes with PSPNet

The model trained with no defense: pretrain/cityscapes/pspnet/no_defense
The model trained with SAT: pretrain/cityscapes/pspnet/sat
The model trained with DDCAT: pretrain/cityscapes/pspnet/ddcat

Cityscapes with DeepLabv3

The model trained with no defense: pretrain/cityscapes/deeplabv3/no_defense
The model trained with SAT: pretrain/cityscapes/deeplabv3/sat
The model trained with DDCAT: pretrain/cityscapes/deeplabv3/ddcat

VOC2012 with PSPNet

The model trained with no defense: pretrain/voc2012/pspnet/no_defense
The model trained with SAT: pretrain/voc2012/pspnet/sat
The model trained with DDCAT: pretrain/voc2012/pspnet/ddcat

VOC2012 with DeepLabv3

The model trained with no defense: pretrain/voc2012/deeplabv3/no_defense
The model trained with SAT: pretrain/voc2012/deeplabv3/sat
The model trained with DDCAT: pretrain/voc2012/deeplabv3/ddcat

Citation Information

If you find the project useful, please cite:

@inproceedings{xu2021ddcat,
  title={Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation},
  author={Xu, Xiaogang and Zhao, Hengshuang and Jia, Jiaya},
  booktitle={ICCV},
  year={2021}
}

Acknowledgments

This source code is inspired by semseg.

Contributions

If you have any questions/comments/bug reports, feel free to e-mail the author Xiaogang Xu ([email protected]).
