Unofficial PyTorch Implementation for HifiFace (https://arxiv.org/abs/2106.09965)

Overview

HifiFace — Unofficial PyTorch Implementation

Image source: HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping (figure 1, pg. 1)


This repository is an unofficial implementation of the face swapping model proposed by Wang et al. in their paper HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping. This implementation makes use of the PyTorch Lightning library, a lightweight wrapper for PyTorch.

HifiFace Overview

The task of face swapping transfers the face and identity of a source person onto the head of a target person.

The HifiFace architecture can be broken up into three primary components: the 3D shape-aware identity extractor, the semantic facial fusion module, and an encoder-decoder structure. A high-level overview of the architecture can be seen in the image below.

Image source: HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping (figure 2, pg. 3)
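
To make the composition concrete, below is a minimal, hypothetical PyTorch sketch of how the three components might fit together in a forward pass. The class names and interfaces are illustrative assumptions, not the exact modules in this repository.

# Hypothetical sketch of how the three components might compose; the class
# names and interfaces are illustrative assumptions, not this repository's API.
import torch.nn as nn


class HifiFaceGeneratorSketch(nn.Module):
    def __init__(self, identity_extractor, encoder, decoder, sff):
        super().__init__()
        self.identity_extractor = identity_extractor  # 3D shape-aware identity extractor
        self.encoder = encoder                        # encodes the target image
        self.decoder = decoder                        # decodes conditioned on identity/shape
        self.sff = sff                                # semantic facial fusion module

    def forward(self, source_img, target_img):
        # Shape-aware identity vector: source identity embedding fused with
        # 3DMM coefficients describing source shape and target pose/expression.
        v_sid = self.identity_extractor(source_img, target_img)
        # Encode the target, keeping a low-level feature map for the fusion step.
        z_enc, low_level_feat = self.encoder(target_img)
        # Decode conditioned on the shape-aware identity vector.
        z_dec = self.decoder(z_enc, v_sid)
        # The semantic facial fusion module blends decoder output with low-level
        # target features under a predicted face mask to produce the swapped face.
        swapped, mask = self.sff(z_dec, low_level_feat, v_sid)
        return swapped, mask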

Changes from the original paper

Dataset

In the paper, the authors used VGGFace2 and Asian-Celeb as training datasets. Unfortunately, the Asian-Celeb dataset can only be accessed with a Baidu account, which we do not have. Thus, we use only VGGFace2 as our training dataset.

Model

The paper proposes two versions of the HifiFace model based on the output image size: 256x256 and 512x512 (referred to as Ours-256 and Ours-512 in the paper). The 512x512 model uses an extra data preprocessing step before training. In this open-source project, we implement the 256x256 model. For the discriminator, the original paper uses the discriminator from StarGAN v2, while our implementation uses the multi-scale discriminator from SPADE.
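
As a rough illustration of the multi-scale idea (in the spirit of the pix2pixHD/SPADE discriminator, not a copy of its exact architecture), the sketch below runs PatchGAN-style sub-discriminators on progressively downsampled inputs.

# Minimal multi-scale PatchGAN-style discriminator sketch; illustrative only.
# The actual SPADE discriminator adds details such as spectral normalization
# and feature-matching outputs.
import torch.nn as nn
import torch.nn.functional as F


def patch_discriminator(in_channels=3, base=64):
    # A small PatchGAN sub-discriminator that outputs a map of real/fake logits.
    return nn.Sequential(
        nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
        nn.InstanceNorm2d(base * 2),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
        nn.InstanceNorm2d(base * 4),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(base * 4, 1, 4, stride=1, padding=1),
    )


class MultiScaleDiscriminator(nn.Module):
    def __init__(self, num_scales=3):
        super().__init__()
        self.discriminators = nn.ModuleList(
            [patch_discriminator() for _ in range(num_scales)]
        )

    def forward(self, x):
        outputs = []
        for disc in self.discriminators:
            outputs.append(disc(x))
            # Downsample before feeding the next, coarser-scale discriminator.
            x = F.avg_pool2d(x, kernel_size=3, stride=2, padding=1,
                             count_include_pad=False)
        return outputs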

Installation

Build Docker Image

git clone https://github.com/mindslab-ai/hififace 
cd hififace
git clone https://github.com/sicxu/Deep3DFaceRecon_pytorch && git clone https://github.com/NVlabs/nvdiffrast && git clone https://github.com/deepinsight/insightface.git
cp -r insightface/recognition/arcface_torch/ Deep3DFaceRecon_pytorch/models/
cp -r insightface/recognition/arcface_torch/ ./model/
rm -rf insightface
cp -rf 3DMM/* Deep3DFaceRecon_pytorch
mv Deep3DFaceRecon_pytorch model/
rm -rf 3DMM
docker build -t hififace:latent .
rm -rf nvdiffrast

This Dockerfile was inspired by @yuzhou164 in this issue from Deep3DFaceRecon_pytorch.

Pre-Trained Model for Deep3DFace PyTorch

Follow the guidelines in Prepare prerequisite models.

Set the models up at ./model/Deep3DFaceRecon_pytorch/

Pre-Trained Models for ArcFace

We used the official ArcFace pre-trained PyTorch implementation. Download the pre-trained checkpoint from OneDrive (IResNet-100 trained on MS1MV3).

Download HifiFace Pre-Trained Model

Google Drive link: trained on VGGFace2 for 300K iterations.

Training

Dataset & Preprocessing

Align & Crop

We aligned the face images using the landmarks extracted by 3DDFA_V2. The preprocessing code will be added.
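
Until that code is released, the sketch below shows one common way to align a face from five landmarks (eyes, nose tip, mouth corners) with a similarity transform. The 5-point template and the 256x256 crop size are assumptions for illustration, not the exact procedure used in this project.

# Illustrative alignment sketch, not the repository's preprocessing code:
# warp a face onto a canonical 256x256 crop with a similarity transform
# estimated from five landmarks (eyes, nose tip, mouth corners).
import cv2
import numpy as np
from skimage.transform import SimilarityTransform

# ArcFace-style 5-point template defined for a 112x112 crop, scaled to 256x256.
# Treat these coordinates as an assumption; the actual template may differ.
TEMPLATE_112 = np.array(
    [[38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
     [41.5493, 92.3655], [70.7299, 92.2041]], dtype=np.float32)
TEMPLATE_256 = TEMPLATE_112 * (256.0 / 112.0)


def align_face(image, landmarks_5, size=256):
    # `landmarks_5` is an array of five (x, y) points in the original image.
    tform = SimilarityTransform()
    tform.estimate(np.asarray(landmarks_5, dtype=np.float32), TEMPLATE_256)
    matrix = tform.params[:2]  # 2x3 affine part of the estimated similarity
    return cv2.warpAffine(image, matrix, (size, size), borderValue=0)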

Face Segmentation Map

After aligning the face images, you need to generate a face segmentation map for each face image. We used the face segmentation model provided by PSFRGAN; you can use their code and pre-trained model.

Dataset Folder Structure

Each face image and its corresponding segmentation map should have the same file name and the same relative path from the top-level directory, as in the layouts below; a dataset sketch that consumes this layout follows the directory trees.

face_image_dataset_folder
└───identity1
│   │   image1.png
│   │   image2.png
│   │   ...
│   
└───identity2
│   │   image1.png
│   │   image2.png
│   │   ...
│ 
|   ...

face_segmentation_mask_folder
└───identity1
│   │   image1.png
│   │   image2.png
│   │   ...
│   
└───identity2
│   │   image1.png
│   │   image2.png
│   │   ...
│ 
|   ...
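
For orientation, here is a hypothetical PyTorch Dataset sketch that consumes this layout by pairing each image with the segmentation map at the same relative path. It is not the dataset class used by the training code.

# Hypothetical dataset sketch pairing each face image with its segmentation map
# by identical relative path; not the Dataset class used by the training code.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class FaceWithMaskDataset(Dataset):
    def __init__(self, image_root, parsing_root, transform=None):
        self.image_root = Path(image_root)
        self.parsing_root = Path(parsing_root)
        self.transform = transform  # optional joint transform for image and mask
        # Collect relative paths such as "identity1/image1.png".
        self.relative_paths = sorted(
            p.relative_to(self.image_root) for p in self.image_root.rglob("*.png")
        )

    def __len__(self):
        return len(self.relative_paths)

    def __getitem__(self, index):
        rel = self.relative_paths[index]
        image = Image.open(self.image_root / rel).convert("RGB")
        mask = Image.open(self.parsing_root / rel)  # same name, same relative path
        if self.transform is not None:
            image, mask = self.transform(image, mask)
        return image, mask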

Wandb

Wandb (Weights & Biases) is a powerful tool for managing model training. Please create a wandb account and a wandb project before training HifiFace with our training code.

Changing the Configuration

  • config/model.yaml

    • dataset.train.params.image_root: directory path to the training dataset images
    • dataset.train.params.parsing_root: directory path to the training dataset parsing images
    • dataset.validation.params.image_root: directory path to the validation dataset images
    • dataset.validation.params.parsing_root: directory path to the validation dataset parsing images
  • config/trainer.yaml

    • checkpoint.save_dir: directory where the checkpoints will be saved
    • wandb: fill out your wandb entity and project name
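
For orientation, the dot-separated dataset keys above correspond to nested entries in config/model.yaml. The snippet below is a hypothetical sanity check (not part of this repository) that loads the file with PyYAML and verifies that the configured directories exist.

# Hypothetical sanity check (not part of this repository): resolve the
# dot-separated keys listed above in config/model.yaml and confirm that the
# configured directories exist before starting a training run.
from pathlib import Path

import yaml  # PyYAML

KEYS = [
    "dataset.train.params.image_root",
    "dataset.train.params.parsing_root",
    "dataset.validation.params.image_root",
    "dataset.validation.params.parsing_root",
]

with open("config/model.yaml") as f:
    config = yaml.safe_load(f)

for dotted in KEYS:
    node = config
    for part in dotted.split("."):
        node = node[part]  # walk the nested mapping implied by the dotted key
    print(dotted, "->", node, "exists:", Path(node).is_dir())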

Run Docker Container

docker run -it --ipc host --gpus all -v /PATH_TO/hififace:/workspace -v /PATH_TO/DATASET/FOLDER:/DATA --name hififace hififace:latent

Run Training Code

python hififace_trainer.py --model_config config/model.yaml --train_config config/trainer.yaml -n hififace

Inference

Single Image

python hififace_inference --gpus 0 --model_config config/model.yaml --model_checkpoint_path hififace_opensouce_299999.ckpt --source_image_path asset/inference_sample/01_source.png --target_image_path asset/inference_sample/01_target.png --output_image_path ./01_result.png

All Possible Pairs of Images in a Directory

python hififace_inference --gpus 0 --model_config config/model.yaml --model_checkpoint_path hififace_opensouce_299999.ckpt  --input_directory_path asset/inference_sample --output_image_path ./result.png

Interpolation

# interpolates both the identity and the 3D shape.
python hififace_inference --gpus 0 --model_config config/model.yaml --model_checkpoint_path hififace_opensouce_299999.ckpt --source_image_path asset/inference_sample/01_source.png --target_image_path asset/inference_sample/01_target.png --output_image_path ./01_result_all.gif  --interpolation_all 

# interpolates only the identity.
python hififace_inference --gpus 0 --model_config config/model.yaml --model_checkpoint_path hififace_opensouce_299999.ckpt --source_image_path asset/inference_sample/01_source.png --target_image_path asset/inference_sample/01_target.png --output_image_path ./01_result_identity.gif  --interpolation_identity

# interpolates only the 3D shape.
python hififace_inference --gpus 0 --model_config config/model.yaml --model_checkpoint_path hififace_opensouce_299999.ckpt --source_image_path asset/inference_sample/01_source.png --target_image_path asset/inference_sample/01_target.png --output_image_path ./01_result_3d.gif  --interpolation_3d

Our Results

The results from our pre-trained model.

GIF interpolation results from Obama to Trump to Biden and back to Obama. The left image interpolates both the identity and the 3D shape, the middle image interpolates only the identity, and the right image interpolates only the 3D shape.

To-Do List

  • Pre-processing Code
  • Colab Notebook

License

BSD 3-Clause License.

Implementation Author

Changho Choi @ MINDs Lab, Inc. ([email protected])

Matthew B. Webster @ MINDs Lab, Inc. ([email protected])

Citations

@article{DBLP:journals/corr/abs-2106-09965,
  author    = {Yuhan Wang and
               Xu Chen and
               Junwei Zhu and
               Wenqing Chu and
               Ying Tai and
               Chengjie Wang and
               Jilin Li and
               Yongjian Wu and
               Feiyue Huang and
               Rongrong Ji},
  title     = {HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping},
  journal   = {CoRR},
  volume    = {abs/2106.09965},
  year      = {2021}
}