Facial Image Inpainting with Semantic Control

Overview

In this repo, we provide a model for the controllable facial image inpainting task. The model enables users to intuitively edit their images using parametric 3D faces.

The technical report is coming soon.

  • Image Inpainting results

  • Fine-grained Control

Quick Start

Installation

  • Clone the repository and set up a conda environment with all dependencies as follows:
git clone https://github.com/RenYurui/Controllable-Face-Inpainting.git --recursive
cd Controllable-Face-Inpainting

# 1. Create a conda virtual environment.
conda create -n cfi python=3.6
source activate cfi
conda install -c pytorch pytorch=1.7.1 torchvision cudatoolkit=10.2

# 2. Install PyTorch3D
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d && pip install -e .

# 3. Install other dependencies
pip install -r requirements.txt
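
A quick import check (a minimal sanity test, not part of the original instructions; run it inside the cfi environment) can confirm that the core dependencies are visible to Python:

# Verify that PyTorch, torchvision and PyTorch3D import correctly and CUDA is visible.
python -c "import torch, torchvision, pytorch3d; print(torch.__version__, torch.cuda.is_available())"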

Download Prerequisite Models

  • Follow Deep3DFaceRecon to prepare the ./BFM folder. Download 01_MorphableModel.mat and the Expression Basis Exp_Pca.bin. Put the obtained files into the ./Deep3DFaceRecon_pytorch/BFM folder. Then link the folder to the root path.
ln -s /PATH_TO_REPO_ROOT/Deep3DFaceRecon_pytorch/BFM /PATH_TO_REPO_ROOT
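
As a quick sanity check (a minimal sketch; the full set of required files is defined by the Deep3DFaceRecon repo), you can confirm that the two files mentioned above are reachable from the repo root after linking:

# Both files should exist if the BFM folder was prepared and linked correctly.
ls ./BFM/01_MorphableModel.mat ./BFM/Exp_Pca.bin
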
  • Clone the Arcface repo
cd third_part
git clone https://github.com/deepinsight/insightface.git
cp -r ./insightface/recognition/arcface_torch/ ./

Arcface is used to extract identity features for loss computation. Download the pre-trained model from the Arcface repo. By default, the resnet50 backbone (ms1mv3_arcface_r50_fp16) is used. Put the obtained weights at ./third_part/arcface_torch/ms1mv3_arcface_r50_fp16/backbone.pth.
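
For example, assuming the checkpoint was downloaded to the repo root as backbone.pth (the source path is a placeholder; adjust it to wherever you saved the file), placing it at the expected location could look like this:

# Create the expected directory and move the downloaded Arcface weights into place.
mkdir -p ./third_part/arcface_torch/ms1mv3_arcface_r50_fp16
mv ./backbone.pth ./third_part/arcface_torch/ms1mv3_arcface_r50_fp16/backbone.pth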

  • Download the pretrained weights of our model from Google Drive. Save the obtained files into the ./result folder.
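
A minimal sketch of preparing the target folder, assuming the weights come as a zip archive (the archive name below is a placeholder; use whatever file name the download produces):

# Create the result folder and unpack the downloaded weights into it.
mkdir -p ./result
unzip PRETRAINED_WEIGHTS.zip -d ./result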

Inference

We provide some example images. Please run the following command for inference:

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port 1234 demo.py \
--config ./config/facial_image_renderer_ffhq.yaml \
--name facial_image_renderer_ffhq \
--output_dir ./visi_result \
--input_dir ./examples/inputs \
--mask_dir ./examples/masks
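
To inpaint your own photos, the same command can be pointed at custom folders. The pairing convention below is an assumption (it is not documented here): each mask is presumed to share its file name with the corresponding input image.

# Hypothetical layout for custom data; file names and paths are placeholders.
mkdir -p ./my_inputs ./my_masks
cp /path/to/photo.png ./my_inputs/face_001.png
cp /path/to/mask.png ./my_masks/face_001.png   # binary mask with the same file name as the input
# Then pass --input_dir ./my_inputs --mask_dir ./my_masks to demo.py.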

Train the Model from Scratch

Dataset Preparation

  • Download the datasets. We use CelebA-HQ and FFHQ for training and inference. Please download the datasets (image format) and put them under the ./dataset folder.
  • Obtain 3D faces using Deep3DFaceRecon. Follow the Deep3DFaceRecon repo to download the trained weights and save them as ./Deep3DFaceRecon_pytorch/checkpoints/face_recon/epoch_20.pth.
# 1. Extract keypoints from the face images for cropping.
cd scripts
# extract keypoints from CelebA
python extract_kp.py \
--data_root PATH_TO_CELEBA_ROOT \
--output_dir PATH_TO_KEYPOINTS \
--dataset celeba \
--device_ids 0,1 \
--workers 6
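
# The same script can presumably be run for FFHQ by switching the dataset flag
# (an assumption: the value accepted by --dataset for FFHQ is not shown here).
python extract_kp.py \
--data_root PATH_TO_FFHQ_ROOT \
--output_dir PATH_TO_FFHQ_KEYPOINTS \
--dataset ffhq \
--device_ids 0,1 \
--workers 6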

# 2. Extract 3DMM coefficients from the face images.
cd .. #repo root
# We provide some scripts for ease of use. However, one can use the original repo to extract the coefficients.
cp scripts/inference_options.py ./Deep3DFaceRecon_pytorch/options
cp scripts/face_recon.py ./Deep3DFaceRecon_pytorch
cp scripts/facerecon_inference_model.py ./Deep3DFaceRecon_pytorch/models
cp scripts/pytorch_3d.py ./Deep3DFaceRecon_pytorch/util
ln -s /PATH_TO_REPO_ROOT/third_part/arcface_torch /PATH_TO_REPO_ROOT/Deep3DFaceRecon_pytorch/models

cd Deep3DFaceRecon_pytorch

python face_recon.py \
--input_dir PATH_TO_CELEBA_ROOT \
--keypoint_dir PATH_TO_KEYPOINTS \
--output_dir PATH_TO_3DMM_COEFFICIENT \
--inference_batch_size 100 \
--name=face_recon \
--dataset_name celeba \
--epoch=20 \
--model facerecon_inference

# 3. Save images and the coefficients into a lmdb file.
cd .. #repo root
python prepare_data.py \
--root PATH_TO_CELEBA_ROOT \
--coeff_file PATH_TO_3DMM_COEFFICIENT \
--dataset celeba \
--out PATH_TO_CELEBA_LMDB_ROOT
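
The same three steps are expected to be repeated for FFHQ by swapping the paths and the --dataset flag. For instance, the final packing step might look like the following sketch (assuming prepare_data.py accepts ffhq as a dataset name, which is not shown above):

python prepare_data.py \
--root PATH_TO_FFHQ_ROOT \
--coeff_file PATH_TO_FFHQ_3DMM_COEFFICIENT \
--dataset ffhq \
--out PATH_TO_FFHQ_LMDB_ROOT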

Train the Model

# We first train the semantic_descriptor_recommender.
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port 1234 train.py \
--config ./config/semantic_descriptor_recommender_celeba.yaml \
--name semantic_descriptor_recommender_celeba

# Then, we train the facial_image_renderer for image inpainting.
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port 1234 train.py \
--config ./config/facial_image_renderer_celeba.yaml \
--name facial_image_renderer_celeba
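
Once training finishes, inference with the newly trained renderer should follow the same pattern as the demo command above. The sketch below assumes the training script writes its checkpoints under ./result, mirroring how the released weights are organized:

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port 1234 demo.py \
--config ./config/facial_image_renderer_celeba.yaml \
--name facial_image_renderer_celeba \
--output_dir ./visi_result \
--input_dir ./examples/inputs \
--mask_dir ./examples/masks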