
FaceBlit: Instant Real-Time Example-Based Style Transfer to Facial Videos

The official implementation of

FaceBlit: Instant Real-Time Example-Based Style Transfer to Facial Videos
A. Texler, O. Texler, M. Kučera, M. Chai, and D. Sýkora
🌐 Project Page, 📄 Paper, 📚 BibTeX

FaceBlit is a system for real-time example-based face video stylization that retains textural details of the style in a semantically meaningful manner, i.e., strokes used to depict specific features in the style are present at the appropriate locations in the target image. Compared to previous techniques, our system preserves the identity of the target subject and runs in real time without the need for large datasets or a lengthy training phase. To achieve this, we modify the existing face stylization pipeline of Fišer et al. [2017] so that it can quickly generate a set of guiding channels that handle identity preservation of the target subject while still being compatible with a faster variant of the patch-based synthesis algorithm of Sýkora et al. [2019]. Thanks to these improvements, we demonstrate the first face stylization pipeline that can instantly transfer artistic style from a single portrait to the target video at interactive rates, even on mobile devices.
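The per-frame flow described above can be sketched conceptually as follows. This is a minimal illustration only: the helper functions are placeholder stubs with invented names, not the repository's actual API (the real implementations live in FaceBlit/app/src/main/cpp, where landmark detection is done with Dlib).

#include <opencv2/opencv.hpp>
#include <vector>

// Placeholder stubs with invented names; the real implementations live in
// FaceBlit/app/src/main/cpp.
std::vector<cv::Point2f> detectLandmarks(const cv::Mat&) { return {}; }
cv::Mat buildGuides(const cv::Mat&, const std::vector<cv::Point2f>&) { return cv::Mat(); }
cv::Mat patchSynthesis(const cv::Mat&, const cv::Mat&, const cv::Mat&) { return cv::Mat(); }

// Conceptual per-frame flow: detect landmarks, build identity-preserving
// guiding channels for target and style, then run fast patch-based synthesis.
cv::Mat stylizeFrame(const cv::Mat& target, const cv::Mat& style,
                     const std::vector<cv::Point2f>& styleLandmarks) {
    std::vector<cv::Point2f> targetLandmarks = detectLandmarks(target);
    cv::Mat targetGuide = buildGuides(target, targetLandmarks);
    cv::Mat styleGuide  = buildGuides(style, styleLandmarks);
    return patchSynthesis(style, styleGuide, targetGuide);
}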

Teaser

Introduction

⚠️ DISCLAIMER: This is a research project, not a production-ready application; it may contain bugs!

This implementation is designed for two platforms: Windows and Android.

  • All C++ sources are located in FaceBlit/app/src/main/cpp, except for main.cpp and main_extension.cpp, which can be found in FaceBlit/VS
  • All Java sources are stored in FaceBlit/app/src/main/java/texler/faceblit
  • Style exemplars (.png) are located in FaceBlit/app/src/main/res/drawable
  • Files holding detected landmarks (.txt) and lookup tables (.bytes) for each style are located in FaceBlit/app/src/main/res/raw
  • The algorithm assumes the style image and the input video/image have the same resolution; a quick pre-flight check is sketched below
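For example, the resolution assumption in the last bullet can be verified up front with a small OpenCV check (a hypothetical helper, not part of the repository):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

// Hypothetical pre-flight check for the resolution assumption above;
// FaceBlit expects the style exemplar and the target to have identical size.
bool resolutionsMatch(const std::string& stylePath, const std::string& targetPath) {
    cv::Mat style  = cv::imread(stylePath);
    cv::Mat target = cv::imread(targetPath);
    if (style.empty() || target.empty()) {
        std::cerr << "Failed to load style or target image\n";
        return false;
    }
    return style.size() == target.size();
}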

Build and Run

  • Clone the repository: git clone https://github.com/AnetaTexler/FaceBlit.git
  • The repository contains all necessary LIB files and includes for both platforms, except for the OpenCV DLL files for Windows
  • The project uses Dlib 19.21, which is added as a single source file (FaceBlit/app/src/main/cpp/source.cpp) and is compiled together with the other sources, so no extra setup is needed

Windows

  • OpenCV 4.5.0 is required; you can download the pre-built version directly from here and add the opencv_world450d.dll and opencv_world450.dll files from opencv-4.5.0-vc14_vc15/build/x64/vc15/bin to your PATH
  • Open the solution FaceBlit/VS/FaceBlit.sln in Visual Studio (tested with VS 2019)
  • Provide a facial video/image or use existing sample videos and images in FaceBlit/VS/TESTS.
    • The input video/image must have a resolution of 768x1024 pixels (width x height)
  • In the main() function in FaceBlit/VS/main.cpp, you can change the following parameters (an example setup is sketched after this list):
    • targetPath - path to input images and videos (there are some sample inputs in FaceBlit/VS/TESTS)
    • targetName - name of a target PNG image or MP4 video with extension (e.g. "target2.mp4")
    • styleName - name of a style with extension from the FaceBlit/app/src/main/res/drawable path (e.g. "style_het.png")
    • stylizeBG - true/false (true - stylize the whole image/video, which does not always deliver pleasing results; false - stylize only the face)
    • NNF_patchsize - voting patch size (odd number, ideally 3 or 5); 0 for no voting
  • Finally, run the code and see results in FaceBlit/VS/TESTS
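For reference, a parameter setup might look like the following. The values are illustrative only; in the real project these variables already exist inside main() in FaceBlit/VS/main.cpp.

#include <string>

// Illustrative values; the repository's own code in main() consumes them.
int main() {
    std::string targetPath = "TESTS/";        // sample inputs ship in FaceBlit/VS/TESTS
    std::string targetName = "target2.mp4";   // PNG image or MP4 video, 768x1024 (W x H)
    std::string styleName  = "style_het.png"; // from FaceBlit/app/src/main/res/drawable
    bool stylizeBG         = false;           // false = stylize only the face
    int  NNF_patchsize     = 3;               // odd voting patch size (3 or 5); 0 = no voting
    // ... the repository's synthesis code runs here ...
    return 0;
}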

Android

  • OpenCV binaries (.so) are already included in the repository (FaceBlit/app/src/main/jniLibs)
  • Open the FaceBlit project in Android Studio (tested with Android Studio 4.1.3 and Gradle 6.5), install NDK 21.0.6 via File > Settings > Appearance & Behavior > System Settings > Android SDK > SDK Tools, and build the project
  • Install the application on your mobile device and face the camera (works with both front and back cameras). Press the bottom-right button to display styles (scroll right to show more) and choose one. Wait a few seconds until the face detector loads, and enjoy the style transfer!
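On Android, the Java UI in FaceBlit/app/src/main/java/texler/faceblit drives the native C++ synthesis code through JNI. A hypothetical bridge is sketched below; the class name, method name, and signature are invented for illustration and are not the repository's actual bindings.

#include <jni.h>
#include <opencv2/opencv.hpp>

// Hypothetical JNI entry point (invented name/signature). A common
// OpenCV-on-Android pattern is to pass cv::Mat native addresses across
// the Java/C++ boundary.
extern "C" JNIEXPORT void JNICALL
Java_texler_faceblit_Stylizer_nativeStylize(JNIEnv* env, jobject /*thiz*/,
                                            jlong targetAddr, jlong styleAddr,
                                            jlong resultAddr) {
    cv::Mat& target = *reinterpret_cast<cv::Mat*>(targetAddr);
    cv::Mat& style  = *reinterpret_cast<cv::Mat*>(styleAddr);
    cv::Mat& result = *reinterpret_cast<cv::Mat*>(resultAddr);
    // The real implementation would build guides and run patch-based
    // synthesis here; this placeholder just passes the frame through.
    result = target.clone();
    (void)env; (void)style;
}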

License

The algorithm is not patented. The code is released into the public domain; feel free to use it for research or commercial purposes.

Citing

If you find FaceBlit useful for your research or work, please use the following BibTeX entry.

@Article{Texler21-I3D,
    author    = "Aneta Texler and Ond\v{r}ej Texler and Michal Ku\v{c}era and Menglei Chai and Daniel S\'{y}kora",
    title     = "FaceBlit: Instant Real-time Example-based Style Transfer to Facial Videos",
    journal   = "Proceedings of the ACM on Computer Graphics and Interactive Techniques",
    volume    = "4",
    number    = "1",
    year      = "2021",
}