Anime Face Detector using mmdet and mmpose

Overview

This is an anime face detector using mmdetection and mmpose.

(To avoid copyright issues, the example images shown here were generated with the TADNE model.)

The model detects near-frontal anime faces and predicts 28 landmark points.

The repository also shows the result of k-means clustering of landmarks detected in real images, and the mean images of the real images belonging to each cluster.
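
The clustering itself is not part of this package. The following is a minimal sketch of how detected landmarks could be clustered with k-means; it assumes scikit-learn is available, and the bounding-box normalization, the choice of 10 clusters, and the `all_preds` variable are illustrative assumptions, not the exact procedure used for those figures.

import numpy as np
from sklearn.cluster import KMeans

def landmark_features(pred):
    """Normalize the 28 (x, y) keypoints into the unit square of their face box."""
    x0, y0, x1, y1 = pred['bbox'][:4]
    pts = pred['keypoints'][:, :2].copy()
    pts[:, 0] = (pts[:, 0] - x0) / (x1 - x0)
    pts[:, 1] = (pts[:, 1] - y0) / (y1 - y0)
    return pts.ravel()  # 56-dimensional feature vector

# `all_preds` is a hypothetical list of detector outputs collected over a dataset.
features = np.stack([landmark_features(p) for p in all_preds])
kmeans = KMeans(n_clusters=10, random_state=0).fit(features)
print(np.bincount(kmeans.labels_))  # cluster sizes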

Installation

pip install openmim
mim install mmcv-full
mim install mmdet
mim install mmpose

pip install anime-face-detector

This package is tested only on Ubuntu.
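
As a quick sanity check after installation (not part of the official instructions), you can confirm that the mm* packages import and print their versions:

python -c "import mmcv, mmdet, mmpose; print(mmcv.__version__, mmdet.__version__, mmpose.__version__)"
python -c "import anime_face_detector"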

Usage

The following example is also available as a Colab notebook.

import cv2

from anime_face_detector import create_detector

detector = create_detector('yolov3')
image = cv2.imread('assets/input.jpg')
preds = detector(image)
print(preds[0])
{'bbox': array([2.2450244e+03, 1.5940223e+03, 2.4116030e+03, 1.7458063e+03,
        9.9987185e-01], dtype=float32),
 'keypoints': array([[2.2593938e+03, 1.6680436e+03, 9.3236601e-01],
        [2.2825300e+03, 1.7051841e+03, 8.7208068e-01],
        [2.3412151e+03, 1.7281011e+03, 1.0052248e+00],
        [2.3941377e+03, 1.6825046e+03, 5.9705663e-01],
        [2.4039426e+03, 1.6541921e+03, 8.7139702e-01],
        [2.2625220e+03, 1.6330233e+03, 9.7608268e-01],
        [2.2804077e+03, 1.6408495e+03, 1.0021354e+00],
        [2.2969380e+03, 1.6494972e+03, 9.7812974e-01],
        [2.3357908e+03, 1.6453258e+03, 9.8418534e-01],
        [2.3475276e+03, 1.6355408e+03, 9.5060223e-01],
        [2.3612463e+03, 1.6262626e+03, 9.0553057e-01],
        [2.2682278e+03, 1.6631940e+03, 9.5465249e-01],
        [2.2814783e+03, 1.6616484e+03, 9.0782022e-01],
        [2.2987590e+03, 1.6692812e+03, 9.0256405e-01],
        [2.2833625e+03, 1.6879142e+03, 8.0303693e-01],
        [2.2934949e+03, 1.6909009e+03, 8.9718056e-01],
        [2.3021218e+03, 1.6863715e+03, 9.3882143e-01],
        [2.3471826e+03, 1.6636573e+03, 9.5727938e-01],
        [2.3677822e+03, 1.6540554e+03, 9.4890594e-01],
        [2.3889211e+03, 1.6611255e+03, 9.5125675e-01],
        [2.3575544e+03, 1.6800433e+03, 8.5919142e-01],
        [2.3688926e+03, 1.6800665e+03, 8.3275074e-01],
        [2.3804905e+03, 1.6761322e+03, 8.4160626e-01],
        [2.3165366e+03, 1.6947096e+03, 9.1840971e-01],
        [2.3282458e+03, 1.7104808e+03, 8.8045174e-01],
        [2.3380054e+03, 1.7114034e+03, 8.8357794e-01],
        [2.3485500e+03, 1.7080273e+03, 8.6284375e-01],
        [2.3378748e+03, 1.7118135e+03, 9.7880816e-01]], dtype=float32)}
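
The bounding box is (x0, y0, x1, y1, score) and each keypoint row is (x, y, score). Below is a minimal visualization sketch based on that output format; the score thresholds of 0.5 and 0.3 and the drawing style are arbitrary choices, not defaults of this package.

import cv2

from anime_face_detector import create_detector

detector = create_detector('yolov3')
image = cv2.imread('assets/input.jpg')
preds = detector(image)

for pred in preds:
    box = pred['bbox']
    if box[4] < 0.5:  # skip low-confidence face detections (threshold is an assumption)
        continue
    x0, y0, x1, y1 = map(int, box[:4])
    cv2.rectangle(image, (x0, y0), (x1, y1), (0, 255, 0), 2)
    for x, y, score in pred['keypoints']:
        if score < 0.3:  # keypoint score threshold, also an assumption
            continue
        cv2.circle(image, (int(x), int(y)), 2, (0, 0, 255), -1)

cv2.imwrite('out.jpg', image)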

Pretrained models

Pretrained models are provided and are downloaded automatically the first time you use them.
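
For example, the face-detector backbone can be chosen when creating the detector. The sketch below assumes the Faster R-CNN variant listed among the pretrained models is exposed under the name 'faster-rcnn', alongside the 'yolov3' name used in the Usage example; the weights are fetched on first use.

from anime_face_detector import create_detector

# 'yolov3' is the name used in the Usage example; 'faster-rcnn' is assumed to select the Faster R-CNN variant.
detector = create_detector('faster-rcnn')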

Demo (using Gradio)

The demo is hosted on Hugging Face Spaces.

Run locally

pip install gradio
git clone https://github.com/hysts/anime-face-detector
cd anime-face-detector

python demo_gradio.py


Comments
  • How do you implement clustering of face landmarks?

    Thank you for sharing this wonderful project. I am curious about how you implement the clustering of face landmarks. Can you describe it in detail, or share some related papers or projects? Thanks in advance.

    opened by Adenialzz 8
  • Citation Issue

    Hi, @hysts

    First of all, thank you so much for the great work!

    I'm a graduate student and have used your pretrained model to generate landmark points as ground truth. I'm currently finishing up my thesis writing and want to cite your github repo.

    I don't know if I overlooked something, but I couldn't find citation information in the README. Is there any way to cite this repo?

    Thank you.

    opened by zeachkstar 2
  • colab notebook encounters problem while installing dependencies

    Hi, the Colab notebook looks broken. I used it about two weeks ago without any problem. Basically, in the dependency-installation phase, when executing "mim install mmcv-full", Colab asks whether I want to replace the pre-installed newer version with an older one. I had to choose the older version to make the detector work.

    I retried the Colab notebook yesterday. This time, if I chose to replace the pre-installed v1.5.0 with v1.4.2, it got stuck at "building wheel for mmcv-full" for 20 minutes and failed. If I chose not to replace the pre-installed version and skipped mmcv-full, the dependency-installation phase completed without error, but when I ran the detector I got the error "KeyError: 'center'".

    Please help.

    KeyError                                  Traceback (most recent call last)
    [<ipython-input-8-2cb6d21c10b9>](https://localhost:8080/#) in <module>()
         12 image = cv2.imread(input)
         13 
    ---> 14 preds = detector(image)
    
    6 frames
    [/content/anime-face-detector/anime_face_detector/detector.py](https://localhost:8080/#) in __call__(self, image_or_path, boxes)
        145                 boxes = [np.array([0, 0, w - 1, h - 1, 1])]
        146         box_list = [{'bbox': box} for box in boxes]
    --> 147         return self._detect_landmarks(image, box_list)
    
    [/content/anime-face-detector/anime_face_detector/detector.py](https://localhost:8080/#) in _detect_landmarks(self, image, boxes)
        101             format='xyxy',
        102             dataset_info=self.dataset_info,
    --> 103             return_heatmap=False)
        104         return preds
        105 
    
    [/usr/local/lib/python3.7/dist-packages/mmcv/utils/misc.py](https://localhost:8080/#) in new_func(*args, **kwargs)
        338 
        339             # apply converted arguments to the decorated method
    --> 340             output = old_func(*args, **kwargs)
        341             return output
        342 
    
    [/usr/local/lib/python3.7/dist-packages/mmpose/apis/inference.py](https://localhost:8080/#) in inference_top_down_pose_model(model, imgs_or_paths, person_results, bbox_thr, format, dataset, dataset_info, return_heatmap, outputs)
        385             dataset_info=dataset_info,
        386             return_heatmap=return_heatmap,
    --> 387             use_multi_frames=use_multi_frames)
        388 
        389         if return_heatmap:
    
    [/usr/local/lib/python3.7/dist-packages/mmpose/apis/inference.py](https://localhost:8080/#) in _inference_single_pose_model(model, imgs_or_paths, bboxes, dataset, dataset_info, return_heatmap, use_multi_frames)
        245                 data['image_file'] = imgs_or_paths
        246 
    --> 247         data = test_pipeline(data)
        248         batch_data.append(data)
        249 
    
    [/usr/local/lib/python3.7/dist-packages/mmpose/datasets/pipelines/shared_transform.py](https://localhost:8080/#) in __call__(self, data)
        105         """
        106         for t in self.transforms:
    --> 107             data = t(data)
        108             if data is None:
        109                 return None
    
    [/usr/local/lib/python3.7/dist-packages/mmpose/datasets/pipelines/top_down_transform.py](https://localhost:8080/#) in __call__(self, results)
        287         joints_3d = results['joints_3d']
        288         joints_3d_visible = results['joints_3d_visible']
    --> 289         c = results['center']
        290         s = results['scale']
        291         r = results['rotation']
    
    KeyError: 'center'
    
    opened by zhongzishi 2
  • Question about the annotation tool for landmark

    Thanks for your great work! May I ask which tool you used to annotate the landmarks? The detector does not seem to perform very well on manga images, so I want to manually annotate some manga images. Also, when you trained the landmark detector, did you train the model from scratch or fine-tune a pretrained mmpose model?

    opened by mrbulb 2
  • Question About Training Dataset

    Thanks for your work! It’s very interesting!! May I ask you some questions? Did you manually annotate landmarks for the images generated by the TADNE model? And how many images does your training dataset include?

    opened by GrayNiwako 2
  • how to implement anime face identification with this detector

    Thanks for sharing such nice work! I was wondering if it is possible to implement anime face identification based on this detector. Do you have any plans for this? Would we get good identification accuracy using this detector? Many thanks!

    opened by rsindper 1
  • There is an error in demo.ipynb

    First of all, thank you for sharing your program.

    Today I tried to run the program in Google Colab and got the following error at the import anime_face_detector line. Do you know of any solutions?

    Thank you.

    ImportError                               Traceback (most recent call last)
    in <module>()
          5 import numpy as np
          6
    ----> 7 import anime_face_detector

    7 frames
    /usr/lib/python3.7/importlib/__init__.py in import_module(name, package)
        125                 break
        126             level += 1
    --> 127     return _bootstrap._gcd_import(name[level:], package, level)
        128
        129

    ImportError: /usr/local/lib/python3.7/dist-packages/mmcv/_ext.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE

    opened by 283pm 1
  • Gradio demo on blocks organization

    Hi, thanks for making a Gradio demo for this on Hugging Face (https://huggingface.co/spaces/hysts/anime-face-detector); it looks great with the new 3.0 design as well. Gradio has an event for the new Blocks API (https://huggingface.co/Gradio-Blocks); it would be great if you could join and make a Blocks version of this demo, or another demo. Thanks!

    opened by AK391 1
  • Re-thinking anime(Illustration/draw/manga) character face detection

    Awesome work!

    The face clustering in particular is very neat.

    This work reminds me of a few things:

    How can illustrations be aligned, and what can be done with these 2D landmarks?

    Scaling, rotating, and cropping images: the FFHQ alignment code and webtoon results.

    Artstation-Artistic-face-HQ, which counts as illustration, uses the FFHQ alignment.

    There is also a newer FFHQ alignment: https://arxiv.org/abs/2109.09378

    But anime illustration is not the same as real FFHQ: perspective-related changes (pose) destroy the center, and exaggeration of local parts destroys the global structure.

    [DO.1] Directly build a k-means dictionary (run over a dataset) and align by proximity.

    This analysis is worth mentioning.

    [DO.2] Because there are not many features to use, add continuous 2D spatial features (predictions), more points, and even beyond.

    This needs a hacked model (which might be proposed).

    [DO.3] It could be used directly as a filter to assist edge extraction (preserving as many features as possible),

    to guide VAE or SGF generation, or anime cross-image synthesis.

    If the purpose is not to train a generative model, the likely use is to extend the dataset. If it is used to train generative models, it will greatly affect the results: the generated eye and chin center-aligned visual lines do not keep real image features, just polylines.

    Or more keypoints are needed in the clustering: detection-box points (easy, [DO.4]), and beyond that the whole image.

    Thank you for reading this.

    opened by koke2c95 1
  • add polylines visualize and video test on colab demo.ipynb

    Result

    Polylines visualization test

    (The original MPEG-encoded video couldn't play properly, so it was transcoded.)

    https://user-images.githubusercontent.com/26929386/141799892-0b496ada-66b4-4349-ab72-49aae2317ce4.mp4

    Comments

    • Not yet tested on GPU

    • Cleared all outputs

    • Didn't remove the detect function; just copied it from demo_gradio.py

    • The polylines visualization function can be simplified

    • The polylines visualization function can be customized (color, thickness, groups)

    opened by koke2c95 1