A free, multiplatform SDK for real-time facial motion capture that delivers blendshapes and rigid head pose in 3D space from any RGB camera, photo, or video.

Overview


mocap4face by Facemoji

mocap4face by Facemoji is a free, multiplatform SDK for real-time facial motion capture based on the Facial Action Coding System (FACS). It provides real-time FACS-derived blendshape coefficients and rigid head pose in 3D space from any mobile camera, webcam, photo, or video, enabling live animation of 3D avatars, digital characters, and more.

After fetching input from one of the sources above, the mocap4face SDK produces ARKit-compatible blendshapes, i.e., morph target weight values describing the facial expression in each frame, as shown in the video below. This is useful, for example, for animating a 2D or 3D avatar so that it mimics the user's facial expressions in real time, à la Apple Memoji, but without the need for a hardware TrueDepth camera.
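Conceptually, each frame of output is a set of named blendshape weights in the 0–1 range plus a head transform. Below is a minimal Kotlin sketch of applying such data to a character rig; the FrameResult and Avatar types are hypothetical stand-ins for illustration, not part of the SDK's API:

// Hypothetical per-frame result: ARKit-style blendshape names mapped to
// weights in [0, 1], plus head rotation as Euler angles in degrees.
data class FrameResult(
    val blendshapes: Map<String, Float>,
    val headPitch: Float,
    val headYaw: Float,
    val headRoll: Float
)

// Hypothetical avatar rig whose morph targets are named after ARKit blendshapes.
interface Avatar {
    fun setMorphTarget(name: String, weight: Float)
    fun setHeadRotation(pitch: Float, yaw: Float, roll: Float)
}

// Drive the avatar with one frame of tracking data.
fun applyFrame(result: FrameResult, avatar: Avatar) {
    for ((name, weight) in result.blendshapes) {
        avatar.setMorphTarget(name, weight.coerceIn(0f, 1f)) // e.g. "eyeBlink_L" -> 0.7
    }
    avatar.setHeadRotation(result.headPitch, result.headYaw, result.headRoll)
}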

With mocap4face, you can drive live avatars, build Snapchat-like lenses, AR experiences, face filters that trigger actions, VTubing apps, and more with as little energy impact and CPU/GPU use as possible. As an example, check out how the popular avatar live-streaming app REALITY is using our SDK.

Please Star us on GitHub—it motivates us a lot!

📋 Table of Contents

🤓 Tech Specs

Key Features

  • 42 tracked facial expressions via blendshapes
  • Eye tracking, including the eye gaze vector
  • Tongue tracking
  • Light & fast: the ML model is only 3 MB
  • Tracking coverage of up to ±50° pitch, ±40° yaw, and ±30° roll

🤳 Input

  • Any RGB camera
  • Photo
  • Video

📦 Output

  • ARKit-compatible blendshapes
  • Head position and scale in 2D and 3D
  • Head rotation in world coordinates

Performance

  • 50 FPS on Pixel 4
  • 60 FPS on iPhone SE (1st gen)
  • 90 FPS on iPhone X or newer

💿 Installation

Prerequisites

  1. Get a new developer account at studio.facemoji.co
  2. Generate a unique API key for your app
  3. Paste the API key into your source code
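Where exactly the key goes depends on the platform API demonstrated in the sample projects; the following Kotlin fragment is only a hedged illustration with placeholder names (FaceTracker and create are not the SDK's documented identifiers):

// Placeholder illustration only: FaceTracker below is a stand-in type, not the
// SDK's real class. It just shows the API key generated at studio.facemoji.co
// being passed into tracker initialization in application code.
class FaceTracker private constructor(val apiKey: String) {
    companion object {
        fun create(apiKey: String) = FaceTracker(apiKey)
    }
}

val tracker = FaceTracker.create("YOUR_API_KEY") // key generated at studio.facemoji.co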

iOS

  1. Open the sample Xcode project and run the demo on your iOS device
  2. To use the SDK in your project, either use the bundled XCFramework directly or use the Swift Package Manager (this repository also serves as a Swift PM repository)

Android

  1. Open the sample project in Android Studio and run the demo on your Android device
  2. Add this repository to the list of your Maven repositories in your root build.gradle, for example:
allprojects {
    repositories {
        google()
        mavenCentral()
        // Any other repositories here...

        maven {
            name = "Facemoji"
            url = uri("https://facemoji.jfrog.io/artifactory/default-maven-local/")
        }
    }
}
  3. To use the SDK in your project, add implementation 'co.facemoji:mocap4face:0.1.0' to your Gradle dependencies
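For example, the dependencies block of your module-level build.gradle would then contain:

dependencies {
    // mocap4face SDK, resolved from the Facemoji Maven repository added above
    implementation 'co.facemoji:mocap4face:0.1.0'
}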

🚀 Use Cases

  • Live avatar experiences
  • Snapchat-like lenses
  • AR experiences
  • VTubing apps
  • Live streaming apps
  • Face filters
  • AR games with facial triggers
  • Beauty AR
  • Virtual try-on
  • Play-to-earn games

❤️ Links

📄 License

This library is provided under the Facemoji SDK License Agreement—see LICENSE.

The sample code in this repository is provided under the Facemoji Samples License.

Notices

OSS used in mocap4face SDK:

Comments
  • MainActivity re-opened cannot be tracked

    Only the first opened MainActivity can be tracked. When going back to SplashActivity and re-opening MainActivity, the tracking result is null.

    The test code is as follows (modify SplashActivity):

    override fun onResume() {
        super.onResume()
        Timer("Splash", false).schedule(5000) {
            startMainActivity()
        }
    }

    bug 
    opened by cike978 9
  • Cannot find any documentation about how to build an app using the Facemoji SDK

    I want to build an app like the official Facemoji app, but the demo code is very simple and the SDK has no documentation.

    Can anyone tell me how to use the SDK to build a Facemoji app?

    opened by danlanxiaohei 3
  • Fix "artifact not found for target 'mocap4face'" error

    Issue

    • I got an "artifact not found for target 'mocap4face'" error when I tried to load the mocap4face Swift Package with Xcode 13.3
      • The target name should be exactly the same as the framework name (case-sensitive)

    Changes

    • Change the target name from mocap4face to Mocap4Face to match the framework name
    opened by magicien 2
  • Any plans for Windows & macOS support?

    I'm guessing that since this is already cross-platform on Android, Web, and iOS, this is an upcoming feature. Just curious, since we'd love to use this everywhere if possible.

    opened by aaronn 2
  • [Question]: Is this lib compatible with React Native?

    It would be incredibly awesome to include this library in my React Native start-up app, but I wonder whether there is a React Native-compatible SDK?

    Thanks!!!

    enhancement 
    opened by shamxeed 2
  • OpenGLResizer20: setup OES texture: glError 1282: invalid operation

    On the Android version, opening the Activity works the first time, but after closing the page and opening it again, the error OpenGLResizer20: setup OES texture: glError 1282: invalid operation appears.

    onCameraImageArkitLink(cameraTexture: OpenGLTexture)

    cameraTexture = null

    opened by pengzhenkun 1
  • Stop face tracking

    Hello. Thanks for the great face tracking library!

    How can I stop tracking? Something like a dispose/destroy function?

    I am using this library on the web (JS).

    opened by Nawarius 1
  • Status of macOS build?

    Has any new work been done to bring this to the Mac via Catalyst?

    We primarily just want to be able to use the full camera width when running under M1 Catalyst. We don't need the full feature set, but macOS forces "non-Mac" frameworks to use a cropped, iPhone-like camera view even on the Mac, which has a much wider camera.

    opened by aaronn 0
  • Problems with one eye open and the other closed

    Platform: Android. With one eye open and the other closed, the output is eyeBlink_L 0.7 and eyeBlink_R 0.5; the expected result is eyeBlink_L 0.7 and eyeBlink_R 0.

    bug 
    opened by will4621d 2