HomoInterpGAN - Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation

Overview

HomoInterpGAN

Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation (CVPR 2019, oral)

Installation

The implementation is based on PyTorch. Our model is trained and tested with version 1.0.1.post2. Please install the relevant packages according to your own environment.

All other required packages are listed in "requirements.txt". Please run

pip install -r requirements.txt

to install these packages.

Dataset

Download the "Align&Cropped Images" of the CelebA dataset. If the original link is unavailable, you can also download it here.
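
After extraction, point --data_dir at the directory that contains the aligned face images. As a rough sketch (the exact layout expected by run.py may differ; check the data-loading code if in doubt):

CELEBA_ALIGNED_DIR/
  000001.jpg
  000002.jpg
  ...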

Training

Firstly, cd to the project directory and run

export PYTHONPATH=./:$PYTHONPATH

before executing any script.

To train a model on CelebA, please run

python run.py train --data_dir CELEBA_ALIGNED_DIR -sp checkpoints/CelebA -bs 128 -gpu 0,1,2,3 

Key arguments

--data_dir: The path of the aligned CelebA images.
-sp: The save path; the trained model, logs and intermediate results are stored in this directory.
-bs: Batch size.
-gpu: The GPU indices to use (e.g., 0,1,2,3).
--attr: This specifies the target attributes. Note that we concatenate multiple attributes defined in CelebA to form a grouped attribute. We use "@" to join multiple attributes into one group (e.g., Mouth_Slightly_Open@Smiling forms an "expression" attribute), and "," to separate different grouped attributes. See the default argument of "run.py" for details, and the example below.
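
For example, a grouped-attribute string assembled from the branch table in the "Pretrained model" section would look roughly like the following (illustrative only; the authoritative default value is defined in "run.py"):

python run.py train --data_dir CELEBA_ALIGNED_DIR -sp checkpoints/CelebA -bs 128 -gpu 0,1,2,3 --attr Mouth_Slightly_Open@Smiling,Male@No_Beard@Mustache@Goatee@Sideburns,Black_Hair@Blond_Hair@Brown_Hair@Gray_Hair,Bald@Receding_Hairline@Bangs,Young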

Testing

python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/Smiling  --filter_target_attr Smiling -s 1 --branch_idx 0 --n_ref 5 -bs 8

This conducts attribute manipulation with reference samples selected from the CelebA dataset. The reference samples are selected based on their attributes (--filter_target_attr), and the interpolation path should be chosen accordingly.

Key arguments:

-mp: the model path. The checkpoints of encoder, interpolator and decoder should be stored in this path.
-sp: the save path of the results.
--filter_target_attr: This specifies the attributes of the reference images. The attribute names can be found in "info/attribute_names.txt". We can specify one attribute (e.g., "Smiling") or several attributes (e.g., "Smiling@Mouth_Slightly_Open" filters mouth-open smiling reference images). To filter negative samples, add "NOT" as a prefix to the attribute names, such as "NOTSmiling" or "NOTSmiling@NOTMouth_Slightly_Open" (see the example after this list).
--branch_idx: This specifies the branch index of the interpolator. Each branch handles one group of attributes. Note that the physical meaning of each branch is determined by "--attr" during training. 
-s: The strength of the manipulation. Range of [0, 2] is suggested. If s>1, the effect is exaggerated.
-bs: the batch size of the testing images. 
-n_ref: the number of images used as reference. 
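
For example, to remove a smile rather than add one, the same expression branch can be driven by negative reference samples (the -sp output directory below is just illustrative):

python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/NotSmiling --filter_target_attr NOTSmiling -s 1 --branch_idx 0 --n_ref 5 -bs 8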

Testing on unaligned images

Note that the performance may degrade if the testing image is not well aligned. Thus we also provide a tool for face alignment. Please place all your testing images in a folder (e.g., examples/original), then run

python facealign/align_all.py examples/original examples/aligned

to align the testing images to match the samples in CelebA. Then you can run the manipulation with

python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/Smiling  --filter_target_attr Smiling -s 1 --branch_idx 0 --n_ref 5 -bs 8 --test_folder examples/aligned

Note that an additional argument "--test_folder" is specified.

Pretrained model

We have also provided a pretrained model here. It is trained with the default parameters. The meaning of each branch of the interpolator is listed below.

Branch index | Grouped attribute | Corresponding labels on CelebA
1            | Expression        | Mouth_Slightly_Open, Smiling
2            | Gender trait      | Male, No_Beard, Mustache, Goatee, Sideburns
3            | Hair color        | Black_Hair, Blond_Hair, Brown_Hair, Gray_Hair
4            | Hair style        | Bald, Receding_Hairline, Bangs
5            | Age               | Young
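
Note that the command in the "Testing" section uses --branch_idx 0 for a "Smiling" manipulation, which suggests the branches are 0-indexed while the table above starts from 1. Under that assumption, manipulating hair color with the pretrained model would look roughly like:

python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/Blond_Hair --filter_target_attr Blond_Hair -s 1 --branch_idx 2 --n_ref 5 -bs 8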

Updates

  • Jun 17, 2019: It is observed that the face alignment tool is not perfect, and the results of "Testing on unaligned images" do not perform as well as the results on the CelebA dataset. To make the model less sensitive to the alignment issue, we add random shifting to center_crop during training. The shifting range can be controlled by "--random_crop_bias". We have updated the pretrained model by fine-tuning it with "random_crop_bias=10", which leads to better results on unaligned images.
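
A sketch of how such a fine-tuning run might be launched, assuming "--random_crop_bias" is passed to the train command like the other training options:

python run.py train --data_dir CELEBA_ALIGNED_DIR -sp checkpoints/CelebA -bs 128 -gpu 0,1,2,3 --random_crop_bias 10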

Reference

Ying-Cong Chen, Xiaogang Xu, Zhuotao Tian, Jiaya Jia, "Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation", Computer Vision and Pattern Recognition (CVPR), 2019. PDF

@inproceedings{chen2019Homomorphic,
  title={Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation},
  author={Chen, Ying-Cong and Xu, Xiaogang and Tian, Zhuotao and Jia, Jiaya},
  booktitle={CVPR},
  year={2019}
}

Contact

Please contact [email protected] if you have any questions or suggestions.
