[CVPR 2020] Interpreting the Latent Space of GANs for Semantic Face Editing

Overview

InterFaceGAN - Interpreting the Latent Space of GANs for Semantic Face Editing

Requirements: Python 3.7, PyTorch 1.1.0, TensorFlow 1.12.2, scikit-learn 0.21.2

Figure: High-quality facial attribute editing results with InterFaceGAN.

In this repository, we propose an approach, termed InterFaceGAN, for semantic face editing. Specifically, InterFaceGAN is capable of turning an unconditionally trained face synthesis model into a controllable GAN by interpreting the very first latent space and identifying the hidden semantic subspaces.

[Paper (CVPR)] [Paper (TPAMI)] [Project Page] [Demo] [Colab]

How to Use

Pick up a model, pick up a boundary, pick up a latent code, and then EDIT!

# Before running the following code, please first download
# the pre-trained ProgressiveGAN model on the CelebA-HQ dataset,
# and then place it under the folder "models/pretrain/".
LATENT_CODE_NUM=10
python edit.py \
    -m pggan_celebahq \
    -b boundaries/pggan_celebahq_smile_boundary.npy \
    -n "$LATENT_CODE_NUM" \
    -o results/pggan_celebahq_smile_editing

GAN Models Used (Prior Work)

Before going into details, we would like to first introduce the two state-of-the-art GAN models used in this work: ProgressiveGAN (Karras et al., ICLR 2018) and StyleGAN (Karras et al., CVPR 2019). These two models achieve high-quality face synthesis by learning unconditional GANs. For more details about these two models, please refer to the original papers as well as the official implementations.

ProgressiveGAN: [Paper] [Code]

StyleGAN: [Paper] [Code]

Code Instruction

Generative Models

A GAN-based generative model basically maps latent codes (commonly sampled from a high-dimensional latent space, such as a standard normal distribution) to photo-realistic images. Accordingly, a base generator class, called BaseGenerator, is defined in models/base_generator.py. Basically, it should contain the following member functions:

  • build(): Build a pytorch module.
  • load(): Load pre-trained weights.
  • convert_tf_model() (Optional): Convert pre-trained weights from a tensorflow model.
  • sample(): Randomly sample latent codes. This function should specify what kind of distribution the latent codes are subject to.
  • preprocess(): Preprocess the latent codes before feeding them into the generator.
  • synthesize(): Run the model to get synthesized results (or any other intermediate outputs).
  • postprocess(): Postprocess the outputs from the generator to convert them to images.

We have already provided the following models in this repository:

  • ProgressiveGAN:
    • A clone of the official tensorflow implementation: models/pggan_tf_official/. This clone is only used for converting tensorflow pre-trained weights to pytorch ones. The conversion will be done automatically the first time the model is used. After that, the tensorflow version is not used anymore.
    • Pytorch implementation of official model (just for inference): models/pggan_generator_model.py.
    • Generator class derived from BaseGenerator: models/pggan_generator.py.
    • Please download the official released model trained on the CelebA-HQ dataset and place it in the folder models/pretrain/.
  • StyleGAN:
    • A clone of the official tensorflow implementation: models/stylegan_tf_official/. This clone is only used for converting tensorflow pre-trained weights to pytorch ones. The conversion will be done automatically the first time the model is used. After that, the tensorflow version is not used anymore.
    • Pytorch implementation of official model (just for inference): models/stylegan_generator_model.py.
    • Generator class derived from BaseGenerator: models/stylegan_generator.py.
    • Please download the official released models trained on the CelebA-HQ dataset and the FF-HQ dataset and place them in the folder models/pretrain/.
    • Support synthesizing images from $\mathcal{Z}$ space, $\mathcal{W}$ space, and extended $\mathcal{W}$ space (18x512).
    • Set the truncation trick and noise randomization trick in models/model_settings.py. Among them, we highly recommend setting STYLEGAN_RANDOMIZE_NOISE to False. STYLEGAN_TRUNCATION_PSI = 0.7 and STYLEGAN_TRUNCATION_LAYERS = 8 are inherited from the official implementation. Users can customize these settings for their own models. NOTE: These three settings will NOT affect the pre-trained weights.
  • Customized model:
    • Users can experiment with their own models by deriving a new class from BaseGenerator, as sketched below.
    • Before being used, a new model should first be registered in MODEL_POOL in models/model_settings.py.
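
For reference, below is a minimal sketch of such a customized model. This is not the repository's actual code: the toy network, the weight path, and attribute names like latent_space_dim are placeholders, and the exact hooks BaseGenerator expects may differ slightly; the member functions simply mirror the list above.

import numpy as np
import torch
import torch.nn as nn

from models.base_generator import BaseGenerator


class ToyGenerator(BaseGenerator):
    def build(self):
        # Build a toy pytorch module mapping 512-d codes to 3x32x32 images.
        self.latent_space_dim = 512
        self.net = nn.Sequential(
            nn.Linear(self.latent_space_dim, 3 * 32 * 32),
            nn.Tanh())

    def load(self):
        # Load pre-trained weights (the path is a placeholder).
        self.net.load_state_dict(torch.load('models/pretrain/toy_generator.pth'))

    def sample(self, num):
        # Latent codes follow a standard normal distribution.
        return np.random.randn(num, self.latent_space_dim).astype(np.float32)

    def preprocess(self, latent_codes):
        # Example: re-scale codes onto the sphere of radius sqrt(dim).
        norm = np.linalg.norm(latent_codes, axis=1, keepdims=True)
        return latent_codes / norm * np.sqrt(self.latent_space_dim)

    def synthesize(self, latent_codes):
        # Run the toy module and return raw float images in [-1, 1].
        with torch.no_grad():
            images = self.net(torch.from_numpy(latent_codes))
        return {'z': latent_codes, 'image': images.view(-1, 3, 32, 32).numpy()}

    def postprocess(self, outputs):
        # Map [-1, 1] float images to [0, 255] uint8.
        return ((outputs['image'] + 1) * 127.5).astype(np.uint8)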

Utility Functions

We provide the following utility functions in utils/manipulator.py to make InterFaceGAN easier to use.

  • train_boundary(): This function can be used for boundary searching. It takes pre-prepared latent codes and the corresponding attribute scores as inputs, and then outputs the normal direction of the separation boundary. Basically, this goal is achieved by training a linear SVM. The returned vector can be further used for semantic face editing (see the sketch after this list).
  • project_boundary(): This function can be used for conditional manipulation. It takes a primal direction and other conditional directions as inputs, and then outputs a new normalized direction. Moving a latent code along this new direction will manipulate the primal attribute yet barely affect the conditioned attributes. NOTE: For now, at most two conditions are supported.
  • linear_interpolate(): This function can be used for semantic face editing. It takes a latent code and the normal direction of a particular semantic boundary as inputs, and then outputs a collection of manipulated latent codes obtained with linear interpolation. These interpolations can be used to see how the synthesis varies when the latent code moves along the given direction.
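
Putting these functions together, a minimal sketch of the boundary-search-then-edit pipeline might look as follows. The keyword names follow the descriptions above but are our assumption; the exact signatures may differ slightly.

import numpy as np

from utils.manipulator import train_boundary, linear_interpolate

# Pre-prepared latent codes and their attribute scores (see "Usage" below).
latent_codes = np.load('data/pggan_celebahq/z.npy')       # shape (N, 512)
scores = np.load('data/pggan_celebahq/smile_scores.npy')  # shape (N, 1)

# Fit a linear SVM and keep the normal direction of the separation boundary.
boundary = train_boundary(latent_codes=latent_codes, scores=scores)

# Interpolate one latent code along the boundary normal; each row of
# `interpolations` is a manipulated code ready for synthesis.
interpolations = linear_interpolate(latent_codes[:1], boundary)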

Tools

  • generate_data.py: This script can be used for data preparation. It will generate a collection of syntheses (the images are saved for subsequent attribute prediction) as well as save the corresponding input latent codes.

  • train_boundary.py: This script can be used for boundary searching.

  • edit.py: This script can be used for semantic face editing.

Usage

We take the ProgressiveGAN model trained on the CelebA-HQ dataset as an example.

Prepare Data

NUM=10000
python generate_data.py -m pggan_celebahq -o data/pggan_celebahq -n "$NUM"

Predict Attribute Score

Get your own predictor for attribute $ATTRIBUTE_NAME, evaluate it on all generated images, and save the inference results as data/pggan_celebahq/"$ATTRIBUTE_NAME"_scores.npy. NOTE: The saved results should have shape ($NUM, 1).
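
For example, the scores could be saved as follows; my_attribute_predictor and images are hypothetical placeholders for your own predictor and the generated images, and only the (NUM, 1) layout matters.

import numpy as np

# `my_attribute_predictor` and `images` are placeholders: any model that
# returns one score per image works, as long as the result is saved
# with shape (NUM, 1).
scores = my_attribute_predictor(images)
scores = np.asarray(scores).reshape(-1, 1)  # enforce shape (NUM, 1)
np.save('data/pggan_celebahq/smile_scores.npy', scores)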

Search Semantic Boundary

python train_boundary.py \
    -o boundaries/pggan_celebahq_"$ATTRIBUTE_NAME" \
    -c data/pggan_celebahq/z.npy \
    -s data/pggan_celebahq/"$ATTRIBUTE_NAME"_scores.npy

Compute Conditional Boundary (Optional)

This step is optional and depends on whether conditional manipulation is needed. Users can use the function project_boundary() in utils/manipulator.py to compute the projected direction, as sketched below.
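
A minimal sketch, assuming the primal boundary is passed first and is followed by at most two conditional boundaries (the output filename is a placeholder):

import numpy as np

from utils.manipulator import project_boundary

age = np.load('boundaries/pggan_celebahq_age_boundary.npy')
gender = np.load('boundaries/pggan_celebahq_gender_boundary.npy')

# The projected direction manipulates age while barely affecting gender.
age_c_gender = project_boundary(age, gender)
np.save('boundaries/my_age_c_gender_boundary.npy', age_c_gender)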

Boundaries Description

We provide the following boundaries in the folder boundaries/. The boundaries can be more accurate if a stronger attribute predictor is used.

  • ProgressiveGAN model trained on CelebA-HQ dataset:

    • Single boundary:
      • pggan_celebahq_pose_boundary.npy: Pose.
      • pggan_celebahq_smile_boundary.npy: Smile (expression).
      • pggan_celebahq_age_boundary.npy: Age.
      • pggan_celebahq_gender_boundary.npy: Gender.
      • pggan_celebahq_eyeglasses_boundary.npy: Eyeglasses.
      • pggan_celebahq_quality_boundary.npy: Image quality.
    • Conditional boundary:
      • pggan_celebahq_age_c_gender_boundary.npy: Age (conditioned on gender).
      • pggan_celebahq_age_c_eyeglasses_boundary.npy: Age (conditioned on eyeglasses).
      • pggan_celebahq_age_c_gender_eyeglasses_boundary.npy: Age (conditioned on gender and eyeglasses).
      • pggan_celebahq_gender_c_age_boundary.npy: Gender (conditioned on age).
      • pggan_celebahq_gender_c_eyeglasses_boundary.npy: Gender (conditioned on eyeglasses).
      • pggan_celebahq_gender_c_age_eyeglasses_boundary.npy: Gender (conditioned on age and eyeglasses).
      • pggan_celebahq_eyeglasses_c_age_boundary.npy: Eyeglasses (conditioned on age).
      • pggan_celebahq_eyeglasses_c_gender_boundary.npy: Eyeglasses (conditioned on gender).
      • pggan_celebahq_eyeglasses_c_age_gender_boundary.npy: Eyeglasses (conditioned on age and gender).
  • StyleGAN model trained on CelebA-HQ dataset:

    • Single boundary in $\mathcal{Z}$ space:
      • stylegan_celebahq_pose_boundary.npy: Pose.
      • stylegan_celebahq_smile_boundary.npy: Smile (expression).
      • stylegan_celebahq_age_boundary.npy: Age.
      • stylegan_celebahq_gender_boundary.npy: Gender.
      • stylegan_celebahq_eyeglasses_boundary.npy: Eyeglasses.
    • Single boundary in $\mathcal{W}$ space:
      • stylegan_celebahq_pose_w_boundary.npy: Pose.
      • stylegan_celebahq_smile_w_boundary.npy: Smile (expression).
      • stylegan_celebahq_age_w_boundary.npy: Age.
      • stylegan_celebahq_gender_w_boundary.npy: Gender.
      • stylegan_celebahq_eyeglasses_w_boundary.npy: Eyeglasses.
  • StyleGAN model trained on FF-HQ dataset:

    • Single boundary in $\mathcal{Z}$ space:
      • stylegan_ffhq_pose_boundary.npy: Pose.
      • stylegan_ffhq_smile_boundary.npy: Smile (expression).
      • stylegan_ffhq_age_boundary.npy: Age.
      • stylegan_ffhq_gender_boundary.npy: Gender.
      • stylegan_ffhq_eyeglasses_boundary.npy: Eyeglasses.
    • Conditional boundary in $\mathcal{Z}$ space:
      • stylegan_ffhq_age_c_gender_boundary.npy: Age (conditioned on gender).
      • stylegan_ffhq_age_c_eyeglasses_boundary.npy: Age (conditioned on eyeglasses).
      • stylegan_ffhq_eyeglasses_c_age_boundary.npy: Eyeglasses (conditioned on age).
      • stylegan_ffhq_eyeglasses_c_gender_boundary.npy: Eyeglasses (conditioned on gender).
    • Single boundary in $\mathcal{W}$ space:
      • stylegan_ffhq_pose_w_boundary.npy: Pose.
      • stylegan_ffhq_smile_w_boundary.npy: Smile (expression).
      • stylegan_ffhq_age_w_boundary.npy: Age.
      • stylegan_ffhq_gender_w_boundary.npy: Gender.
      • stylegan_ffhq_eyeglasses_w_boundary.npy: Eyeglasses.
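
Each boundary file stores the normal direction of the corresponding separation hyperplane and can be inspected directly. The shape shown below is our assumption for $\mathcal{Z}$-space boundaries of these 512-d models:

import numpy as np

boundary = np.load('boundaries/pggan_celebahq_smile_boundary.npy')
print(boundary.shape)            # expected: (1, 512) for Z-space boundaries
print(np.linalg.norm(boundary))  # a normalized direction, so close to 1.0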

BibTeX

@inproceedings{shen2020interpreting,
  title     = {Interpreting the Latent Space of GANs for Semantic Face Editing},
  author    = {Shen, Yujun and Gu, Jinjin and Tang, Xiaoou and Zhou, Bolei},
  booktitle = {CVPR},
  year      = {2020}
}
@article{shen2020interfacegan,
  title   = {InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs},
  author  = {Shen, Yujun and Yang, Ceyuan and Tang, Xiaoou and Zhou, Bolei},
  journal = {TPAMI},
  year    = {2020}
}