Source code for our paper "Do Not Trust Prediction Scores for Membership Inference Attacks"

Overview

Do Not Trust Prediction Scores for Membership Inference Attacks

Figure: False-Positive Examples

Abstract: Membership inference attacks (MIAs) aim to determine whether a specific sample was used to train a predictive model. Knowing this may indeed lead to a privacy breach. Most MIAs, however, make use of the model's prediction scores (the probability of each output given some input), following the intuition that the trained model tends to behave differently on its training data. We argue that this is a fallacy for many modern deep network architectures: e.g., ReLU-type neural networks almost always produce high prediction scores far away from the training data. Consequently, MIAs will miserably fail, since this behavior leads to high false-positive rates not only on known domains but also on out-of-distribution data, and implicitly acts as a defense against MIAs. Specifically, using generative adversarial networks, we are able to produce a potentially infinite number of samples falsely classified as part of the training data. In other words, the threat of MIAs is overestimated and less information is leaked than previously assumed. Moreover, there is actually a trade-off between the overconfidence of classifiers and their susceptibility to MIAs: the more classifiers know when they do not know, making low-confidence predictions far away from the training data, the more they reveal the training data.
arXiv Preprint (PDF): https://arxiv.org/abs/2111.09076

Membership Inference Attacks

Figure: Membership inference attack process


Figure: Membership inference attack preparation process

In the general MIA setting, as usually assumed in the literature, an adversary is given an input x following distribution D and a target model trained on a training set S_train consisting of samples from D. The adversary then faces the problem of identifying whether a given x following D was part of the training set S_train. To predict the membership of x, the adversary creates an inference model h. In score-based MIAs, the input to h is the prediction score vector produced by the target model on sample x (see the first figure above). Since MIAs are binary classification problems, precision, recall, and false-positive rate (FPR) are used as attack evaluation metrics.
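To make the score-based setting concrete, here is a minimal sketch of a confidence-threshold attack and the corresponding evaluation metrics. This is illustrative only; the function names and the threshold value are our own choices, not this repo's API.

import numpy as np

def score_threshold_attack(scores, threshold=0.9):
    # scores: (N, C) array of prediction score (softmax) vectors
    # produced by the target model. A sample is flagged as a training
    # member if its maximum class confidence exceeds the threshold.
    return scores.max(axis=1) > threshold

def attack_metrics(pred_member, is_member):
    # pred_member, is_member: boolean arrays of shape (N,).
    tp = np.sum(pred_member & is_member)
    fp = np.sum(pred_member & ~is_member)
    fn = np.sum(~pred_member & is_member)
    tn = np.sum(~pred_member & ~is_member)
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    fpr = fp / max(fp + tn, 1)
    return precision, recall, fpr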

All MIAs exploit a difference in the behavior of the target model on seen and unseen data. Most attacks in the literature follow Shokri et al. and train so-called shadow models on a disjoint dataset S_shadow drawn from the same distribution D as S_train. The shadow model is used to mimic the behavior of the target model and to adjust the parameters of h, such as threshold values or model weights. Note that the membership status of the inputs to the shadow models is known to the adversary (see the second figure above).
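As a rough sketch of how a shadow model can calibrate such an attack (again illustrative, not this repo's implementation):

import numpy as np

def calibrate_threshold(shadow_scores, shadow_membership):
    # shadow_scores: (N, C) prediction score vectors of the shadow model
    # on samples whose membership status the adversary knows.
    # shadow_membership: boolean array (N,), True for members of S_shadow.
    confidences = shadow_scores.max(axis=1)
    best_t, best_acc = 0.5, 0.0
    for t in np.linspace(0.5, 1.0, 101):
        acc = np.mean((confidences > t) == shadow_membership)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t  # threshold is then reused against the target model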

Setup and Run Experiments

Setup StyleGAN2-ADA

To recreate our Fake datasets containing synthetic CIFAR-10 and Stanford Dogs images, you need to clone the official StyleGAN2-ADA PyTorch repo into the folder datasets:

cd datasets
git clone https://github.com/NVlabs/stylegan2-ada-pytorch.git
rm -r --force stylegan2-ada-pytorch/.git/

You can also safely remove all folders in /datasets/stylegan2-ada-pytorch except /dnnlib and /torch_utils, as sketched below.
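One way to do this (untested convenience snippet; double-check the paths in your checkout before deleting):

cd datasets/stylegan2-ada-pytorch
find . -mindepth 1 -maxdepth 1 -type d ! -name dnnlib ! -name torch_utils -exec rm -r {} +
cd ../..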

Setup Docker Container

To build the Docker container, run the following script:

./docker_build.sh -n confidence_mi

To start the Docker container, run the following command from the project's root:

docker run --rm --shm-size 16G --name my_confidence_mi --gpus '"device=0"' -v $(pwd):/workspace/confidences -it confidence_mi bash

Download Trained Models

We provide the trained models on which we performed our experiments. To download and extract the files automatically, use the following command:

bash download_pretrained_models.sh

To manually download single models, please visit https://hessenbox.tu-darmstadt.de/getlink/fiBg5znMtAagRe58sCrrLtyg/pretrained_models.

Reproduce Results from the Paper

All our experiments on CIFAR-10 and Stanford Dogs can be reproduced with the pre-trained models by running the following scripts:

python experiments/cifar10_experiments.py
python experiments/stanford_dogs_experiments.py

If you want to train the models from scratch, the following commands can be used:

python experiments/cifar10_experiments.py --train
python experiments/stanford_dogs_experiments.py --train --pretrained

We use command-line arguments to specify the hyperparameters of the training and attack processes. Default values correspond to the parameters used for training the target models as stated in the paper; the same applies to the membership inference attacks. To train models with label smoothing, L2 regularization, or LLLA, run the experiments with --label_smoothing, --weight_decay, or --llla, respectively. We set the seed to 42 (default value) for all experiments. For further command-line arguments and details, please refer to the Python files.
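For example, to train the models from scratch with label smoothing (flag combination assumed from the description above):

python experiments/cifar10_experiments.py --train --label_smoothing
python experiments/stanford_dogs_experiments.py --train --pretrained --label_smoothing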

Attack results will be stored in CSV files at /experiments/results/{MODEL_ARCH}_{DATASET_NAME}_{MODIFIERS}_attack_results.csv and state the precision, recall, fpr, and mmps values for the various input datasets and membership inference attacks. Results for training the target and shadow models will be stored in the first column of /experiments/results/{MODEL_ARCH}_{DATASET_NAME}_{MODIFIERS}_performance_results.csv. They state the training and test accuracy, as well as the expected calibration error (ECE).
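To inspect the results programmatically, one option is to load a results file with pandas. The concrete file name below is hypothetical; substitute your model architecture, dataset, and modifiers following the pattern above.

import pandas as pd

# Hypothetical example file name following the pattern above.
results = pd.read_csv("experiments/results/resnet18_cifar10_attack_results.csv")
# Columns assumed from the description above: precision, recall, fpr, mmps.
print(results.head())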

Datasets

All data is required to be located in /data/. To recreate the Fake datasets of StyleGAN2-ADA-generated CIFAR-10 and dog samples, use /datasets/fake_cifar10.py and /datasets/fake_dogs.py. For example, the Fake Dogs samples are located at /data/fake_afhq_dogs/Images after generation. If files are missing or corrupted (checked by MD5 checksum), the images are regenerated to restore the identical datasets used in the paper. This process is called automatically when running one of the experiments.
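The generation can also be triggered manually by running the scripts directly, e.g. (invocation assumed; see the scripts for available options):

python datasets/fake_cifar10.py
python datasets/fake_dogs.py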

We use various datasets in our experiments. The following figure gives a short overview of the content and visual styles of the datasets.

Figure: Dataset overview

Citation

If you build upon our work, please cite us as follows:

@misc{hintersdorf2021trust,
      title={Do Not Trust Prediction Scores for Membership Inference Attacks}, 
      author={Dominik Hintersdorf and Lukas Struppek and Kristian Kersting},
      year={2021},
      eprint={2111.09076},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Implementation Credits

Some of our implementations rely on other repositories. We thank the authors for making their code publicly available. For license details, refer to the corresponding files in our repo. For more details on the specific functionality, please visit the original repositories.

Owner
[email protected]
Machine Learning Group at TU Darmstadt