
Overview

Variational Model Inversion Attacks

Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani

[Figure 1]

  • Most commands are in run_scripts.
  • We outline a few example commands here.
    • Commands below end with a suffix argument. Setting it to 0 runs the code locally; setting it to 1 was used with SLURM on a computing cluster (see the example invocation after this list).
  • The environment variable ROOT1 was set to my home directory.
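
As a concrete illustration, the invocation below shows how a script from run_scripts would be called under this convention. It is a minimal sketch: the script name is taken from the CelebA section below, and treating the trailing 0/1 as the script's final positional argument is an assumption based on the description above.

# Local run (suffix 0); ROOT1 points to the home directory, as noted above.
export ROOT1=$HOME
bash run_scripts/neurips2021-celeba-stylegan-flow.sh 0

# Cluster run via SLURM (suffix 1).
bash run_scripts/neurips2021-celeba-stylegan-flow.sh 1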

Set up task (data & pretrained models, etc.)

Check out the StyleGAN repo and place it in the same directory hierarchy as the present repo. This ensures that the pretrained StyleGAN checkpoints can be loaded and run.

For CelebA experiments (the expected directory layout after these steps is sketched after the list):

  • Data --
    • download the "Align&Cropped Images" from the CelebA website into the directory data/img_align_celeba.
    • make sure data/img_align_celeba contains 000001.jpg through 202599.jpg.
    • download identity_CelebA.txt and put it in data/celeb_a.
  • Pretrained DCGAN -- download the pretrained DCGAN archive and untar it into the folder pretrained/gans/neurips2021-celeba.
  • Pretrained StyleGAN -- download the pretrained StyleGAN archive and untar it into the folder pretrained/stylegan/neurips2021-celeba.
  • Pretrained Target Classifier -- download the pretrained target-classifier archive and untar it into the folder pretrained/classifiers/neurips2021-celeba.
  • Evaluation Classifier --
    • check out the InsightFace repo and place it in the same directory hierarchy as the present repo.
    • follow the instructions in that repo to download the ir_se50 model, which is used as the evaluation classifier.
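
After these steps, the working tree should look roughly like the sketch below. It only summarizes the paths named above; the sibling checkout locations (../stylegan, ../InsightFace) are assumptions about where the neighbouring repos end up.

# Expected directory layout after setup (sketch; run from the root of this repo).
# ../stylegan      -- StyleGAN repo checked out alongside this one
# ../InsightFace   -- InsightFace repo providing the ir_se50 evaluation classifier
mkdir -p data/img_align_celeba                      # 000001.jpg ... 202599.jpg
mkdir -p data/celeb_a                               # identity_CelebA.txt
mkdir -p pretrained/gans/neurips2021-celeba         # pretrained DCGAN
mkdir -p pretrained/stylegan/neurips2021-celeba     # pretrained StyleGAN
mkdir -p pretrained/classifiers/neurips2021-celeba  # pretrained target classifier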

Train VMI

CelebA

  • The script below runs the VMI attack on the first 100 IDs and saves the results under results/celeba-id.
run_scripts/neurips2021-celeba-stylegan-flow.sh
  • Generate and aggregate the attack samples by running the command below. The results will be saved to results/images_pt/stylegan-attack-with-labels-id0-100.pt.
python generate_vmi_attack_samples.py
  • Evaluate the generated samples by running:
fprefix=results/images_pt/stylegan-attack-with-labels-id0-100

python evaluate_samples.py \
	--name load_samples_pt \
	--samples_pt_prefix $fprefix \
	--eval_what stats \
	--nclass 100
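
Putting the three steps together, a minimal end-to-end run might look like the sketch below. It only chains the commands already listed in this section, and it assumes the attack script accepts the local/SLURM suffix as its final argument.

export ROOT1=$HOME    # home directory, as noted in the overview

# 1. Run the VMI attack on the first 100 IDs (0 = run locally).
bash run_scripts/neurips2021-celeba-stylegan-flow.sh 0

# 2. Generate and aggregate the attack samples.
python generate_vmi_attack_samples.py

# 3. Evaluate the aggregated samples.
fprefix=results/images_pt/stylegan-attack-with-labels-id0-100
python evaluate_samples.py \
	--name load_samples_pt \
	--samples_pt_prefix $fprefix \
	--eval_what stats \
	--nclass 100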

Acknowledgements

Code contains snippets from:
https://github.com/adjidieng/PresGANs
https://github.com/pytorch/examples/tree/master/mnist
https://github.com/wyharveychen/CloserLookFewShot
