[CVPR 2021] PyTorch implementation of Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs

Overview

Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs

Requirements: PyTorch 1.7.0, cvxpy 1.1.11, TensorFlow 1.14

In this work, we propose HijackGAN, a framework that enables non-linear latent space traversal and provides high-level control (e.g., over attributes, head pose, and landmarks) of unconditional image-generation GANs in a fully black-box setting. It opens up the possibility of reusing pretrained GANs while raising concerns about unintended use.

[Paper (CVPR 2021)][Project Page]
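To make the idea concrete, below is a minimal conceptual sketch in PyTorch (not the repository's API; the names traverse and proxy are illustrative): a proxy network that approximates the attribute scores of the black-box pipeline as a function of the latent code is queried for gradients, and the latent code is updated iteratively along them, yielding a non-linear trajectory that can be fed to the black-box generator.

import torch

def traverse(z, proxy, attr_index, step_size=0.2, steps=40, direction=1):
    # z: (1, latent_dim) latent code; proxy: module mapping z -> attribute scores.
    z = z.clone().detach().requires_grad_(True)
    trajectory = [z.detach().clone()]
    for _ in range(steps):
        score = proxy(z)[:, attr_index].sum()
        grad, = torch.autograd.grad(score, z)          # query the proxy for a local direction
        with torch.no_grad():
            z += direction * step_size * grad / (grad.norm() + 1e-8)
        trajectory.append(z.detach().clone())          # each code can be rendered by the black-box GAN
    return trajectory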

Prerequisites

Install required packages

pip install -r requirements.txt

Download pretrained GANs

Download the CelebAHQ pretrained weights of ProgressiveGAN [paper][code] and StyleGAN [paper][code], and then put those weights in ./models/pretrain. For example,

pretrain/
├── Pretrained_Models_Should_Be_Placed_Here
├── karras2018iclr-celebahq-1024x1024.pkl
├── karras2019stylegan-celebahq-1024x1024.pkl
├── pggan_celebahq_z.pt
├── stylegan_celebahq_z.pt
├── stylegan_headpose_z_dp.pt
└── stylegan_landmark_z.pt

Quick Start

Specify the number of images to edit, the model used to generate them, and the editing parameters. For example:

LATENT_CODE_NUM=1
python edit.py \
    -m pggan_celebahq \
    -b boundaries/ \
    -n "$LATENT_CODE_NUM" \
    -o results/stylegan_celebahq_eyeglasses \
    --step_size 0.2 \
    --steps 40 \
    --attr_index 0 \
    --task attribute \
    --method ours

Usage

Important: different input images (initial points) may call for different step sizes and numbers of steps. The following examples use the parameters from our paper; adjust them for better results.

Specify Number of Samples

LATENT_CODE_NUM=1

Unconditional Modification

python edit.py \
    -m pggan_celebahq \
    -b boundaries/ \
    -n "$LATENT_CODE_NUM" \
    -o results/stylegan_celebahq_smile_editing \
    --step_size 0.2 \
    --steps 40 \
    --attr_index 0 \
    --task attribute

Conditional Modification

python edit.py \
    -m pggan_celebahq \
    -b boundaries/ \
    -n "$LATENT_CODE_NUM" \
    -o results/stylegan_celebahq_smile_editing \
    --step_size 0.2 \
    --steps 40 \
    --attr_index 0 \
    --condition \
    -i codes/pggan_cond/age.npy \
    --task attribute
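For intuition, here is a hedged sketch of one way conditional editing can be realized (an assumption about the mechanism, not the repository's exact implementation): the gradient of the edited attribute is projected so that its components along the gradients of attributes that should stay fixed are removed before each step. The function and argument names are hypothetical.

import torch

def conditional_step(z, proxy, edit_index, fixed_indices, step_size=0.2):
    # One conditional update: move along the edited attribute's gradient while
    # removing its components along gradients of attributes to keep fixed.
    z = z.clone().detach().requires_grad_(True)
    scores = proxy(z)
    g, = torch.autograd.grad(scores[:, edit_index].sum(), z, retain_graph=True)
    g = g.flatten()
    for i in fixed_indices:
        g_fix, = torch.autograd.grad(scores[:, i].sum(), z, retain_graph=True)
        g_fix = g_fix.flatten()
        g = g - (g @ g_fix) / (g_fix @ g_fix + 1e-8) * g_fix
    with torch.no_grad():
        return (z + step_size * g.view_as(z)).detach()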

Head pose

Pitch

python edit.py \
    -m stylegan_celebahq \
    -b boundaries/ \
    -n "$LATENT_CODE_NUM" \
    -o results/ \
    --task head_pose \
    --method ours \
    --step_size 0.01 \
    --steps 2000 \
    --attr_index 1 \
    --condition \
    --direction -1 \
    --demo

Yaw

python edit.py \
    -m stylegan_celebahq \
    -b boundaries/ \
    -n "$LATENT_CODE_NUM" \
    -o results/ \
    --task head_pose \
    --method ours \
    --step_size 0.1 \
    --steps 200 \
    --attr_index 0 \
    --condition \
    --direction 1 \
    --demo

Landmarks

Parameters for reference (attr_index: step_size, steps): 4: 0.005, 400; 5: 0.01, 100; 6: 0.1, 200; 8: 0.1, 200.

CUDA_VISIBLE_DEVICES=0 python edit.py \
    -m stylegan_celebahq \
    -b boundaries/ \
    -n "$LATENT_CODE_NUM" \
    -o results/ \
    --task landmark \
    --method ours \
    --step_size 0.1 \
    --steps 200 \
    --attr_index 6 \
    --condition \
    --direction 1 \
    --demo

Generate Balanced Data

This is a template showing how we generated balanced data for attribute manipulation (16 attributes in our internal experiments). You can modify it to better fit your task. Please first refer to here, replace YOUR_TASK_MODEL with your own classification model, and then run:

NUM=500000
CUDA_VISIBLE_DEVICES=0 python generate_balanced_data.py -m stylegan_celebahq \
    -o ./generated_data -K ./generated_data/indices.pkl -n "$NUM" -SI 0 --no_generated_imgs
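For reference, a hedged sketch of the kind of interface YOUR_TASK_MODEL is expected to provide (the class and argument names below are hypothetical): any module that maps generated images to per-attribute scores can be plugged in to decide which latent codes to keep so that each attribute stays balanced.

import torch
import torch.nn as nn

class AttributeClassifier(nn.Module):
    # Hypothetical interface: images in, per-attribute probabilities out.
    def __init__(self, backbone, feature_dim, num_attributes=16):
        super().__init__()
        self.backbone = backbone                     # e.g. any pretrained CNN feature extractor
        self.head = nn.Linear(feature_dim, num_attributes)

    def forward(self, images):
        feats = self.backbone(images).flatten(1)     # (N, feature_dim)
        return torch.sigmoid(self.head(feats))       # (N, num_attributes), scores in [0, 1]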

Evaluations

TO-DO

  • Basic usage
  • Prerequisites
  • How to generate data
  • How to evaluate

Acknowledgment

This code is built upon InterfaceGAN.
