Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark

Yonghao Xu and Pedram Ghamisi


This research has been conducted at the Institute of Advanced Research in Artificial Intelligence (IARAI).

This is the official PyTorch implementation of the black-box adversarial attack methods for remote sensing data in our paper Universal adversarial examples in remote sensing: Methodology and benchmark.

Table of contents

  1. Dataset
  2. Supported methods and models
  3. Preparation
  4. Adversarial attacks on scene classification
  5. Adversarial attacks on semantic segmentation
  6. Performance evaluation on the UAE-RS dataset
  7. Paper
  8. Acknowledgement
  9. License

Dataset

We collect the generated universal adversarial examples into a dataset named UAE-RS, the first dataset to provide black-box adversarial samples in the remote sensing field.

📑 Download links: Google Drive | Baidu NetDisk (Code: 8g1r)

To build UAE-RS, we use the Mixcut-Attack method to attack ResNet18 with 1050 test samples from the UCM dataset and 5000 test samples from the AID dataset for scene classification. For semantic segmentation, we use the Mixup-Attack method to attack FCN-8s with 5 test images from the Vaihingen dataset (image IDs: 11, 15, 28, 30, 34) and 5 test images from the Zurich Summer dataset (image IDs: 16, 17, 18, 19, 20).

Example images in the UCM dataset and the corresponding adversarial examples in the UAE-RS dataset.

Example images in the AID dataset and the corresponding adversarial examples in the UAE-RS dataset.

Qualitative results of the black-box adversarial attacks from FCN-8s → SegNet on the Vaihingen dataset.

(a) The original clean test images in the Vaihingen dataset. (b) The corresponding adversarial examples in the UAE-RS dataset. (c) Segmentation results of SegNet on the clean images. (d) Segmentation results of SegNet on the adversarial images. (e) Ground-truth annotations.
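The Mixup-Attack and Mixcut-Attack algorithms themselves are defined in the paper and implemented by the attack scripts in this repo. Purely as a hedged illustration of the surrogate-based workflow they follow (craft perturbations on a white-box surrogate such as ResNet18, then transfer the saved adversarial images to unseen target models), the sketch below runs a plain iterative gradient-sign attack in PyTorch; it is not the paper's method, and the epsilon, step size, class count, and input size are illustrative assumptions.

# Minimal sketch of a surrogate-based transfer attack (NOT Mixup-/Mixcut-Attack).
# Assumes image tensors in [0, 1] and a classification surrogate.
import torch
import torch.nn.functional as F
from torchvision import models

def iterative_attack(surrogate, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative gradient-sign attack on the surrogate (illustrative hyperparameters)."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the L-infinity ball around the input.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + torch.clamp(adv - images, -eps, eps)
        adv = adv.clamp(0, 1)
    return adv.detach()

surrogate = models.resnet18(num_classes=21).eval()  # randomly initialized stand-in; load a UCM-pretrained surrogate in practice
x = torch.rand(4, 3, 224, 224)                      # dummy batch standing in for UCM images
y = torch.randint(0, 21, (4,))
x_adv = iterative_attack(surrogate, x, y)           # these images would then be fed to unseen target models
print(x_adv.shape)

In UAE-RS itself, the stored adversarial images were produced with Mixcut-Attack (scene classification) and Mixup-Attack (semantic segmentation), not with this plain baseline.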

Supported methods and models

This repo contains implementations of black-box adversarial attacks for remote sensing data on both scene classification and semantic segmentation tasks.

Preparation

  • Package requirements: The scripts in this repo are tested with torch==1.10 and torchvision==0.11 using two NVIDIA Tesla V100 GPUs.
  • Remote sensing datasets used in this repo: UCM (UCMerced LandUse), AID, Vaihingen, and Zurich Summer.
  • Data folder structure
    • The data folder is structured as follows:
├── <THE-ROOT-PATH-OF-DATA>/
│   ├── UCMerced_LandUse/
│   │   ├── Images/
│   │   │   ├── agricultural/
│   │   │   ├── airplane/
│   │   │   ├── ...
│   ├── AID/
│   │   ├── Airport/
│   │   ├── BareLand/
│   │   ├── ...
│   ├── Vaihingen/
│   │   ├── img/
│   │   ├── gt/
│   │   ├── ...
│   ├── Zurich/
│   │   ├── img/
│   │   ├── gt/
│   │   ├── ...
│   ├── UAE-RS/
│   │   ├── UCM/
│   │   ├── AID/
│   │   ├── Vaihingen/
│   │   ├── Zurich/
  • Pretraining the models for scene classification
CUDA_VISIBLE_DEVICES=0,1 python pretrain_cls.py --network 'alexnet' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0,1 python pretrain_cls.py --network 'resnet18' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0,1 python pretrain_cls.py --network 'inception' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
...
  • Pretraining the models for semantic segmentation
cd ./segmentation
CUDA_VISIBLE_DEVICES=0 python pretrain_seg.py --model 'fcn8s' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0 python pretrain_seg.py --model 'deeplabv2' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0 python pretrain_seg.py --model 'segnet' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
...

Please replace <THE-ROOT-PATH-OF-DATA> with the local path where you store the remote sensing datasets.
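As a quick sanity check that the folder layout above is in place, the hypothetical snippet below (not one of the repo's scripts) loads the UCM images with torchvision's ImageFolder and prints the class and sample counts; only the paths shown in the tree above are assumed.

import os
from torchvision import datasets, transforms

root_dir = "<THE-ROOT-PATH-OF-DATA>"  # replace with your local data root
ucm_dir = os.path.join(root_dir, "UCMerced_LandUse", "Images")

ucm = datasets.ImageFolder(ucm_dir, transform=transforms.ToTensor())
print(f"{len(ucm.classes)} classes, {len(ucm)} images")  # UCM contains 21 land-use classes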

Adversarial attacks on scene classification

  • Generate adversarial examples:
CUDA_VISIBLE_DEVICES=0 python attack_cls.py --surrogate_model 'resnet18' \
                                            --attack_func 'fgsm' \
                                            --dataID 1 \
                                            --root_dir <THE-ROOT-PATH-OF-DATA>
  • Performance evaluation on the adversarial test set:
CUDA_VISIBLE_DEVICES=0 python test_cls.py --surrogate_model 'resnet18' \
                                          --target_model 'inception' \
                                          --attack_func 'fgsm' \
                                          --dataID 1 \
                                          --root_dir <THE-ROOT-PATH-OF-DATA>

You can change the --surrogate_model, --attack_func, and --target_model parameters to evaluate the performance under different attack scenarios.
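test_cls.py performs this evaluation within the repo. As a hedged reference for what the reported overall accuracy (OA) of a target model on an adversarial test set means, the sketch below scores a folder of saved adversarial images with an independently loaded target classifier; the placeholder paths, the preprocessing, and the 21-class ResNet50 head are assumptions, not the repo's exact settings.

import torch
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
adv_set = datasets.ImageFolder("<PATH-TO-SAVED-ADVERSARIAL-IMAGES>", transform=transform)
loader = torch.utils.data.DataLoader(adv_set, batch_size=32, shuffle=False)

# Stand-in target model; load your own pretrained weights for a meaningful evaluation.
target = models.resnet50(num_classes=21).to(device).eval()
# target.load_state_dict(torch.load("<PATH-TO-TARGET-WEIGHTS>"))

correct = total = 0
with torch.no_grad():
    for x, y in loader:
        pred = target(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
print(f"OA: {100.0 * correct / total:.2f}%")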

Adversarial attacks on semantic segmentation

  • Generate adversarial examples:
cd ./segmentation
CUDA_VISIBLE_DEVICES=0 python attack_seg.py --surrogate_model 'fcn8s' \
                                            --attack_func 'fgsm' \
                                            --dataID 1 \
                                            --root_dir <THE-ROOT-PATH-OF-DATA>
  • Performance evaluation on the adversarial test set:
CUDA_VISIBLE_DEVICES=0 python test_seg.py --surrogate_model 'fcn8s' \
                                          --target_model 'segnet' \
                                          --attack_func 'fgsm' \
                                          --dataID 1 \
                                          --root_dir <THE-ROOT-PATH-OF-DATA>

You can change the --surrogate_model, --attack_func, and --target_model parameters to evaluate the performance under different attack scenarios.
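For segmentation, the benchmark below reports the mean F1 score (mF1) over classes. As a hedged reference (not the repo's exact evaluation code), the sketch below accumulates a confusion matrix from predicted and ground-truth label maps and derives per-class F1 scores and their mean.

# Illustrative mean F1 (mF1) computation from label maps (not the repo's exact code).
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Accumulate a confusion matrix (rows: ground truth, columns: prediction)."""
    mask = (gt >= 0) & (gt < num_classes)  # ignore invalid/void labels
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_f1(conf):
    tp = np.diag(conf).astype(float)
    precision = tp / np.maximum(conf.sum(axis=0), 1)
    recall = tp / np.maximum(conf.sum(axis=1), 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return 100.0 * f1.mean()

num_classes = 5  # illustrative class count
gt = np.random.randint(0, num_classes, (256, 256)).ravel()
pred = np.random.randint(0, num_classes, (256, 256)).ravel()
print(f"mF1: {mean_f1(confusion_matrix(pred, gt, num_classes)):.2f}")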

Performance evaluation on the UAE-RS dataset

  • Scene classification:
CUDA_VISIBLE_DEVICES=0 python test_cls_uae_rs.py --target_model 'inception' \
                                                 --dataID 1 \
                                                 --root_dir <THE-ROOT-PATH-OF-DATA>

Scene classification results of different deep neural networks on the clean and UAE-RS test sets:

| Model | UCM Clean (OA %) | UCM Adversarial (OA %) | UCM OA Gap | AID Clean (OA %) | AID Adversarial (OA %) | AID OA Gap |
|---|---|---|---|---|---|---|
| AlexNet | 90.28 | 30.86 | -59.42 | 89.74 | 18.26 | -71.48 |
| VGG11 | 94.57 | 26.57 | -68.00 | 91.22 | 12.62 | -78.60 |
| VGG16 | 93.04 | 19.52 | -73.52 | 90.00 | 13.46 | -76.54 |
| VGG19 | 92.85 | 29.62 | -63.23 | 88.30 | 15.44 | -72.86 |
| Inception-v3 | 96.28 | 24.86 | -71.42 | 92.98 | 23.48 | -69.50 |
| ResNet18 | 95.90 | 2.95 | -92.95 | 94.76 | 0.02 | -94.74 |
| ResNet50 | 96.76 | 25.52 | -71.24 | 92.68 | 6.20 | -86.48 |
| ResNet101 | 95.80 | 28.10 | -67.70 | 92.92 | 9.74 | -83.18 |
| ResNeXt50 | 97.33 | 26.76 | -70.57 | 93.50 | 11.78 | -81.72 |
| ResNeXt101 | 97.33 | 33.52 | -63.81 | 95.46 | 12.60 | -82.86 |
| DenseNet121 | 97.04 | 17.14 | -79.90 | 95.50 | 10.16 | -85.34 |
| DenseNet169 | 97.42 | 25.90 | -71.52 | 95.54 | 9.72 | -85.82 |
| DenseNet201 | 97.33 | 26.38 | -70.95 | 96.30 | 9.60 | -86.70 |
| RegNetX-400MF | 94.57 | 27.33 | -67.24 | 94.38 | 19.18 | -75.20 |
| RegNetX-8GF | 97.14 | 40.76 | -56.38 | 96.22 | 19.24 | -76.98 |
| RegNetX-16GF | 97.90 | 34.86 | -63.04 | 95.84 | 13.34 | -82.50 |
  • Semantic segmentation:
cd ./segmentation
CUDA_VISIBLE_DEVICES=0 python test_seg_uae_rs.py --target_model 'segnet' \
                                                 --dataID 1 \
                                                 --root_dir <THE-ROOT-PATH-OF-DATA>

Semantic segmentation results of different deep neural networks on the clean and UAE-RS test sets:

| Model | Vaihingen Clean (mF1 %) | Vaihingen Adversarial (mF1 %) | Vaihingen mF1 Gap | Zurich Summer Clean (mF1 %) | Zurich Summer Adversarial (mF1 %) | Zurich Summer mF1 Gap |
|---|---|---|---|---|---|---|
| FCN-32s | 69.48 | 35.00 | -34.48 | 66.26 | 32.31 | -33.95 |
| FCN-16s | 69.70 | 27.02 | -42.68 | 66.34 | 34.80 | -31.54 |
| FCN-8s | 82.22 | 22.04 | -60.18 | 79.90 | 40.52 | -39.38 |
| DeepLab-v2 | 77.04 | 34.12 | -42.92 | 74.38 | 45.48 | -28.90 |
| DeepLab-v3+ | 84.36 | 14.56 | -69.80 | 82.51 | 62.55 | -19.96 |
| SegNet | 78.70 | 17.84 | -60.86 | 75.59 | 35.58 | -40.01 |
| ICNet | 80.89 | 41.00 | -39.89 | 78.87 | 59.77 | -19.10 |
| ContextNet | 81.17 | 47.80 | -33.37 | 77.89 | 63.71 | -14.18 |
| SQNet | 81.85 | 39.08 | -42.77 | 76.32 | 55.29 | -21.03 |
| PSPNet | 83.11 | 21.43 | -61.68 | 77.55 | 65.39 | -12.16 |
| U-Net | 83.61 | 16.09 | -67.52 | 80.78 | 56.58 | -24.20 |
| LinkNet | 82.30 | 24.36 | -57.94 | 79.98 | 48.67 | -31.31 |
| FRRNetA | 84.17 | 16.75 | -67.42 | 80.50 | 58.20 | -22.30 |
| FRRNetB | 84.27 | 28.03 | -56.24 | 79.27 | 67.31 | -11.96 |

Paper

Universal adversarial examples in remote sensing: Methodology and benchmark

Please cite the following paper if you use the data or the code:

@article{uaers,
  title={Universal adversarial examples in remote sensing: Methodology and benchmark}, 
  author={Xu, Yonghao and Ghamisi, Pedram},
  journal={arXiv preprint arXiv:2202.07054},
  year={2022},
}

Acknowledgement

The authors would like to thank Prof. Shawn Newsam for making the UCM dataset publicly available, Prof. Gui-Song Xia for providing the AID dataset, the International Society for Photogrammetry and Remote Sensing (ISPRS) and the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) for providing the Vaihingen dataset, and Dr. Michele Volpi for providing the Zurich Summer dataset.

  • Efficient-Segmentation-Networks
  • segmentation_models.pytorch
  • Adversarial-Attacks-PyTorch

License

This repo is distributed under the MIT License. The UAE-RS dataset can be used for academic purposes only.
