Source code of "Hold me tight! Influence of discriminative features on deep network boundaries"

Overview

Hold me tight! Influence of discriminative features on deep network boundaries

This is the source code to reproduce the experiments of the NeurIPS 2020 paper "Hold me tight! Influence of discriminative features on deep network boundaries" by Guillermo Ortiz-Jimenez*, Apostolos Modas*, Seyed-Mohsen Moosavi-Dezfooli and Pascal Frossard.

Abstract

Important insights towards the explainability of neural networks reside in the characteristics of their decision boundaries. In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary. This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets. We use this framework to reveal some intriguing properties of CNNs. Specifically, we rigorously confirm that neural networks exhibit a high invariance to non-discriminative features, and show that very small perturbations of the training samples in certain directions can lead to sudden invariances in the orthogonal ones. This is precisely the mechanism that adversarial training uses to achieve robustness.
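As a rough, self-contained illustration of the kind of measurement the paper builds on (not the repository's actual code), the distance of a sample to the decision boundary can be estimated by the norm of a minimal adversarial perturbation, e.g. with a DeepFool-style linearization. Everything below (function name, hyperparameters) is a sketch under that assumption:

import torch

def margin_estimate(model, x, num_classes=10, max_iter=50, overshoot=0.02):
    # Estimate the distance of sample x (shape C x H x W) to the decision
    # boundary of `model` via the L2 norm of a DeepFool-style minimal
    # adversarial perturbation. Rough sketch, not the paper's exact procedure.
    model.eval()
    x = x.clone().detach()
    with torch.no_grad():
        orig_label = model(x.unsqueeze(0)).argmax(dim=1).item()

    r_total = torch.zeros_like(x)
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(max_iter):
        logits = model(x_adv.unsqueeze(0))[0]
        if logits.argmax().item() != orig_label:
            break  # crossed the decision boundary
        grad_orig = torch.autograd.grad(logits[orig_label], x_adv, retain_graph=True)[0]
        # Find the closest competing class under a local linearization of the classifier
        best_dist, best_w, best_f = float("inf"), None, None
        for k in range(num_classes):
            if k == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
            w_k, f_k = grad_k - grad_orig, (logits[k] - logits[orig_label]).item()
            dist_k = abs(f_k) / (w_k.norm().item() + 1e-8)
            if dist_k < best_dist:
                best_dist, best_w, best_f = dist_k, w_k, f_k
        # Minimal step that reaches the linearized boundary of the closest class
        r_total = r_total + (abs(best_f) / (best_w.norm() ** 2 + 1e-8)) * best_w
        x_adv = (x + (1 + overshoot) * r_total).detach().requires_grad_(True)

    return r_total.norm().item()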

Dependencies

To run our code on a Linux machine with a GPU, install the Python packages in a fresh Anaconda environment:

$ conda env create -f environment.yml
$ conda activate hold_me_tight

Experiments

This repository contains code to reproduce the experiments of the paper.

You can reproduce these experiments separately using their individual scripts, or have a look at the comprehensive Jupyter notebook.
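For example, after activating the environment, the notebook interface can be launched from the repository root (the notebook's file name is omitted here):

$ jupyter notebook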

Pretrained architectures

We also provide a set of pretrained models that we used in our experiments. The exact hyperparameters and settings can be found in the Supplementary material of the paper. All the models are publicly available and can be downloaded from here. In order to execute the scripts using the pretrained models, it is recommended to download them and save them under the Models/Pretrained/ directory.
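As a rough illustration only (the checkpoint file name and the model constructor below are assumptions, not the repository's actual API), a downloaded checkpoint could be loaded along these lines:

import torch
from torchvision.models import resnet18  # placeholder; in practice use the model definitions from this repository

ckpt_path = "Models/Pretrained/resnet18_cifar10_standard.pth"  # hypothetical file name

model = resnet18(num_classes=10)                        # assumed constructor for a CIFAR10 ResNet18
state_dict = torch.load(ckpt_path, map_location="cpu")  # load weights on CPU
model.load_state_dict(state_dict)
model.eval()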

| Architecture | Dataset | Training method |
|---|---|---|
| LeNet | MNIST | Standard |
| ResNet18 | MNIST | Standard |
| ResNet18 | CIFAR10 | Standard |
| VGG19 | CIFAR10 | Standard |
| DenseNet121 | CIFAR10 | Standard |
| LeNet | Flipped MNIST | Standard + Frequency flip |
| ResNet18 | Flipped MNIST | Standard + Frequency flip |
| ResNet18 | Flipped CIFAR10 | Standard + Frequency flip |
| VGG19 | Flipped CIFAR10 | Standard + Frequency flip |
| DenseNet121 | Flipped CIFAR10 | Standard + Frequency flip |
| ResNet50 | Flipped ImageNet | Standard + Frequency flip |
| ResNet18 | Low-pass CIFAR10 | Standard + Low-pass filtering |
| VGG19 | Low-pass CIFAR10 | Standard + Low-pass filtering |
| DenseNet121 | Low-pass CIFAR10 | Standard + Low-pass filtering |
| Robust LeNet | MNIST | L2 PGD adversarial training (eps = 2) |
| Robust ResNet18 | MNIST | L2 PGD adversarial training (eps = 2) |
| Robust ResNet18 | CIFAR10 | L2 PGD adversarial training (eps = 1) |
| Robust VGG19 | CIFAR10 | L2 PGD adversarial training (eps = 1) |
| Robust DenseNet121 | CIFAR10 | L2 PGD adversarial training (eps = 1) |
| Robust ResNet50 | ImageNet | L2 PGD adversarial training (eps = 3) (copied from here) |
| Robust LeNet | Flipped MNIST | L2 PGD adversarial training (eps = 2) with Dykstra projection + Frequency flip |
| Robust ResNet18 | Flipped MNIST | L2 PGD adversarial training (eps = 2) with Dykstra projection + Frequency flip |
| Robust ResNet18 | Flipped CIFAR10 | L2 PGD adversarial training (eps = 1) with Dykstra projection + Frequency flip |
| Robust VGG19 | Flipped CIFAR10 | L2 PGD adversarial training (eps = 1) with Dykstra projection + Frequency flip |
| Robust DenseNet121 | Flipped CIFAR10 | L2 PGD adversarial training (eps = 1) with Dykstra projection + Frequency flip |
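The frequency-flipped and low-pass variants in the table are trained on data modified in the Fourier domain. The sketch below shows one plausible way to implement such transforms with torch.fft; the exact transforms used for the experiments are the ones defined in the paper's Supplementary material and in this repository's code, which may differ in details:

import torch

def frequency_flip(img: torch.Tensor) -> torch.Tensor:
    # Swap low- and high-frequency content of an image (C, H, W) by reversing
    # the order of its centered DFT coefficients. Illustrative sketch only.
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    flipped = torch.flip(spec, dims=(-2, -1))  # low <-> high frequencies
    out = torch.fft.ifft2(torch.fft.ifftshift(flipped, dim=(-2, -1)))
    return out.real

def low_pass(img: torch.Tensor, keep: int = 8) -> torch.Tensor:
    # Keep only a (2*keep x 2*keep) square of low frequencies around the DC
    # component and zero out the rest. Illustrative sketch only.
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    mask = torch.zeros_like(spec)
    h, w = img.shape[-2:]
    ch, cw = h // 2, w // 2
    mask[..., ch - keep:ch + keep, cw - keep:cw + keep] = 1
    out = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1)))
    return out.real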

Reference

If you use this code, or some of the attached models, please cite the following paper:

@InCollection{OrtizModasHMT2020,
  TITLE = {{Hold me tight! Influence of discriminative features on deep network boundaries}},
  AUTHOR = {{Ortiz-Jimenez}, Guillermo and {Modas}, Apostolos and {Moosavi-Dezfooli}, Seyed-Mohsen and Frossard, Pascal},
  BOOKTITLE = {Advances in Neural Information Processing Systems 33},
  MONTH = dec,
  YEAR = {2020}
}