Efficient face emotion recognition in photos and videos

Overview

This repository contains code for face emotion recognition developed within the RSF (Russian Science Foundation) project no. 20-71-10010 (Efficient audiovisual analysis of dynamical changes in emotional state based on information-theoretic approach).

Our approach is described in the arXiv paper published at IEEE SISY 2021. An extended version of this paper is under consideration at an international journal.

All models were pre-trained for the face identification task on the VGGFace2 dataset. The SAM code was borrowed to train the PyTorch models.

We provide several models that achieve state-of-the-art results on the AffectNet dataset. The facial features extracted by these models lead to state-of-the-art accuracy of face-only models on video datasets from the EmotiW 2019 and 2020 challenges: AFEW (Acted Facial Expressions In The Wild), VGAF (Video-level Group AFfect) and EngageWild.

Here are the accuracies measured on the test sets of the above-mentioned datasets:

| Model                   | AffectNet (8 classes), original | AffectNet (8 classes), aligned | AffectNet (7 classes), original | AffectNet (7 classes), aligned | AFEW  | VGAF  |
|-------------------------|---------------------------------|--------------------------------|---------------------------------|--------------------------------|-------|-------|
| mobilenet_7.h5          | -                               | -                              | 64.71                           | -                              | 55.35 | 68.92 |
| enet_b0_8_best_afew.pt  | 60.95                           | 60.18                          | 64.63                           | 64.54                          | 59.89 | 66.80 |
| enet_b0_8_best_vgaf.pt  | 61.32                           | 61.03                          | 64.57                           | 64.89                          | 55.14 | 68.29 |
| enet_b0_7.pt            | -                               | -                              | 65.74                           | 65.74                          | 56.99 | 65.18 |
| enet_b2_8.pt            | 63.025                          | 62.40                          | 66.29                           | -                              | 57.78 | 70.23 |
| enet_b2_7.pt            | -                               | -                              | 65.91                           | 66.34                          | 59.63 | 69.84 |

Please note that we report the accuracies for AFEW and VGAF only on the subsets in which MTCNN detects facial regions. The code also computes the overall accuracy on the complete test set, which is slightly lower due to missing faces or failed face detection.

In order to run our code on the datasets, please first prepare them using our TensorFlow notebooks: train_emotions.ipynb, AFEW_train.ipynb and VGAF_train.ipynb.

If you want to run our mobile application, please run the following scripts inside the mobile_app folder:

python to_tflite.py
python to_pytorchlite.py
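
For reference, here is a minimal sketch of the kind of Keras-to-TFLite conversion performed by to_tflite.py (the model path is illustrative; see the actual script for the exact options):

import tensorflow as tf

# Load a trained Keras emotion model (path is illustrative)
model = tf.keras.models.load_model('../models/affectnet_emotions/mobilenet_7.h5')

# Convert to TensorFlow Lite for the mobile application
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('emotions_mobilenet.tflite', 'wb') as f:
    f.write(tflite_model)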

Please note that the EfficientNet models for PyTorch are based on the old timm 0.4.5 package, so exactly this version should be installed with the following command:

pip install timm==0.4.5
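
Once the correct timm version is installed, inference with one of the PyTorch models can look roughly like this (a minimal sketch: the model path, input size and normalization constants are assumptions; see the notebooks for the exact pipeline):

import torch
from PIL import Image
from torchvision import transforms

device = 'cpu'
# Load the serialized EfficientNet-B0 emotion model (path is illustrative)
model = torch.load('../models/affectnet_emotions/enet_b0_8_best_afew.pt', map_location=device)
model.eval()

# Standard ImageNet-style preprocessing at 224x224 (assumed here)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open('face.jpg').convert('RGB')  # a pre-cropped facial image
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0).to(device))
    probs = torch.softmax(logits, dim=1)
print(probs.argmax(dim=1).item())
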
Comments
  • Can you share your Manually_Annotated_file csv files?

    I tested on the AffectNet validation data, but got 0.5965 using enet_b2_8.pt. Can you share the Manually_Annotated_file validation.csv and training.csv with me for debugging?

    opened by Dian-Yi 10
  • AffectNet March 2021 version training script update

    As mentioned in #14, there are different versions of AffectNet. I updated the PyTorch training script for the AffectNet March 2021 release. Two notes:

    • I used horizontal flip for training augmentation (see the sketch below),
    • and the emotion order in the logits differs.
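
    A minimal sketch of such an augmentation pipeline (assuming torchvision transforms; the image size and normalization constants are illustrative, not taken from the updated script):

    from torchvision import transforms

    # Training-time augmentation with a random horizontal flip
    train_transforms = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
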
    opened by sunggukcha 6
  • Confidence range for inference using the Python library

    Hi,

    First of all, thank you so much for such a convenient setup to use!

    I'm using the face emotion Python library in my code with model_name = 'enet_b0_8_best_afew'. I was wondering what the range of the confidence returned by the library, or by this model in particular, is. I wasn't able to figure that out.

    Thank you
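
    If the scores returned turn out to be raw logits (which are unbounded), a softmax maps them to probabilities in [0, 1]; a generic sketch, not specific to the library's internals:

    import torch

    # Raw logits are unbounded; softmax yields probabilities in [0, 1] that sum to 1
    logits = torch.tensor([[1.3, -0.2, 4.1, 0.5, -1.0, 0.0, 2.2, -0.7]])
    probs = torch.softmax(logits, dim=1)
    print(probs, probs.sum())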

    opened by varunsingh3000 4
  • Preprocessing of images to run inference

    Hello, thank you very much for your work.

    I am trying to preprocess a batch of images (I have my own dataset) the way you prepared your data. I'm following the notebook train_emotions.ipynb since it is in TensorFlow, and I'm using that framework.

    I have a question about the preprocessing steps, so I would like to ask you for the correct ones. These are the steps I'm following; let me know if I'm right or if something is missing:

    1. I already have my images with the faces detected and cropped, i.e., I have a dataset full of face crops.

    2. img = cv2.imread(img_path)

    3. img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    4. img = cv2.resize(img,(224,224))

    5. Then your notebook applies a normalization:

       def mobilenet_preprocess_input(x, **kwargs):
           x[..., 0] -= 103.939
           x[..., 1] -= 116.779
           x[..., 2] -= 123.68
           return x

       preprocessing_function = mobilenet_preprocess_input

    Here I am having an issue because the in-place subtraction cannot cast the float result back into the integer image array, so I changed it to:

       def mobilenet_preprocess_input(x, **kwargs):
           x[..., 0] = x[..., 0] - 103.939
           x[..., 1] = x[..., 1] - 116.779
           x[..., 2] = x[..., 2] - 123.68
           return x

       preprocessing_function = mobilenet_preprocess_input

    So, let me know if the process I'm following is correct or if there's something missing.

    Thank you!
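
    Putting the steps above together, a minimal sketch of the whole pipeline (casting the image to float32 before the mean subtraction avoids the integer/float casting issue; the file name is illustrative):

    import cv2
    import numpy as np

    def mobilenet_preprocess_input(x, **kwargs):
        # Subtract the per-channel means used in the notebook
        x[..., 0] -= 103.939
        x[..., 1] -= 116.779
        x[..., 2] -= 123.68
        return x

    img = cv2.imread('face.jpg')                 # an already cropped face image
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224))
    img = img.astype(np.float32)                 # cast before in-place subtraction
    img = mobilenet_preprocess_input(img)
    batch = np.expand_dims(img, axis=0)          # shape (1, 224, 224, 3)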

    opened by isa-tr 4
  • AttributeError: 'SqueezeExcite' object has no attribute 'gate'

    Excuse me, this problem occurs when using the 'enet_b2_7.pt' model for testing. I followed the steps you gave, but I really couldn't find the reason for this problem. Do you have any suggestions?

    opened by evercy 4
  • Age/gender/ethnicity model giving the same output for different inputs

    import numpy as np
    import tensorflow as tf

    class CNN(object):

        def __init__(self, model_filepath):
            self.model_filepath = model_filepath
            self.load_graph(model_filepath=self.model_filepath)

        def load_graph(self, model_filepath):
            print('Loading model...')
            self.graph = tf.Graph()
            self.sess = tf.compat.v1.InteractiveSession(graph=self.graph)

            # Read the frozen graph definition from the .pb file
            with tf.compat.v1.gfile.GFile(model_filepath, 'rb') as f:
                graph_def = tf.compat.v1.GraphDef()
                graph_def.ParseFromString(f.read())

            print('Check out the input placeholders:')
            nodes = [n.name + ' => ' + n.op for n in graph_def.node if n.op == 'Placeholder']
            for node in nodes:
                print(node)

            # Define the input tensor
            self.input = tf.compat.v1.placeholder(np.float32, shape=[None, 224, 224, 3], name='input')

            tf.import_graph_def(graph_def, {'input_1': self.input})

            print('Model loading complete!')

            # Print layer names
            layers = [op.name for op in self.graph.get_operations()]
            for layer in layers:
                print(layer)

        def test(self, data):
            # Know your output node names
            output_tensor1 = self.graph.get_tensor_by_name('import/age_pred/Softmax:0')
            output_tensor2 = self.graph.get_tensor_by_name('import/gender_pred/Sigmoid:0')
            output_tensor3 = self.graph.get_tensor_by_name('import/ethnicity_pred/Softmax:0')
            output = self.sess.run([output_tensor1, output_tensor2, output_tensor3],
                                   feed_dict={self.input: data})
            return output
    

    I use this code to load "age_gender_ethnicity_224_deep-03-0.13-0.97-0.88.pb" and predict with it. But when predicting on images, I get the same output array every time.

    [array([[0.01319346, 0.00229602, 0.00176407, 0.00270929, 0.01408699, 0.00574261, 0.00756087, 0.01012164, 0.01221055, 0.01821703, 0.01120028, 0.00936489, 0.01003029, 0.00912451, 0.00813381, 0.00894791, 0.01277262, 0.01034999, 0.01053109, 0.0133063 , 0.01423471, 0.01610439, 0.01528896, 0.01825454, 0.01722076, 0.01933933, 0.01908059, 0.01899827, 0.01919533, 0.0278129 , 0.02204996, 0.02146631, 0.02125309, 0.02146868, 0.02230236, 0.02054285, 0.02096066, 0.01976574, 0.01990371, 0.02064857, 0.01843528, 0.01697922, 0.01610838, 0.01458549, 0.01581902, 0.01377539, 0.01298613, 0.01378927, 0.01191105, 0.01335083, 0.01154454, 0.01118198, 0.01019558, 0.01038121, 0.00920709, 0.00902615, 0.00936321, 0.00969135, 0.00867239, 0.00838663, 0.00797724, 0.00756043, 0.00890809, 0.00758041, 0.00743711, 0.00584346, 0.00555749, 0.00639214, 0.0061864 , 0.00784793, 0.00532241, 0.00567684, 0.00481544, 0.0052173 , 0.00513186, 0.00394571, 0.00415856, 0.00384584, 0.00452774, 0.0041736 , 0.00328163, 0.00327138, 0.00297012, 0.00369216, 0.00284221, 0.00255897, 0.00285459, 0.00232105, 0.00228869, 0.00218005, 0.0021927 , 0.00236659, 0.00233843, 0.00204793, 0.00209861, 0.00231407, 0.00145706, 0.00179674, 0.00186183, 0.00221309]], dtype=float32), array([[0.62949586]], dtype=float32), array([[0.21338916, 0.19771543, 0.19809113, 0.19525865, 0.19554558]], dtype=float32)]

    Is there something I am missing, or is this .pb file not meant for prediction?

    opened by sneakatyou 4
  • Provide the validation script/notebook.

    Hi,

    I am fond of your work and paper, but I cannot find any validation script to reproduce your results, especially the best result with EfficientNet-B2 on the 8-class AffectNet.

    Or could you please provide a separate script to pre-process the input images so that we can validate the weights provided in your GitHub repository?

    Thank you,

    opened by ltkhang 4
  • A few suggestions.

    Hello!

    I have a couple of ideas:

    1. Could you please add a text description of the differences between the models, especially between the b0 and b2 types?
    2. Please consider adding the hsemotion-onnx package to the PyPI repository.
    opened by ioctl-user 3
  • Cannot load pretrained models

     File "/Users/xxx/Library/Python/3.8/lib/python/site-packages/timm/models/efficientnet_blocks.py", line 47, in forward
        return x * self.gate(x_se)
      File "/Users/xxx/Library/Python/3.8/lib/python/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'SqueezeExcite' object has no attribute 'gate'
    
    opened by DefTruth 3
  • An error when running the code

    When running AFEW_train.ipynb, an error occurred:

    could not broadcast input array from shape (0,112,3) into shape (60,112,3) at facial_anylysis.py line 274 : tmp[dy[k]-1:edy[k],dx[k]-1:edx[k],:] = img[y[k]-1:ey[k],x[k]-1:ex[k],:]

    Why does this occur? Could you please fix it?

    opened by kiva12138 3
  • Valence and arousal

    Hello again! I've read your paper and I've seen that you use the circumplex model's variables, arousal and valence. How do those variables appear in the code? I can't find them :( Thank you, Amaia

    opened by AmaiaBiomedicalEngineer 2
  • Question about this work.

    Dear Andrey Savchenko,

    I'm a student and I'm going to build a small system to detect students' emotions for my thesis. While looking for a solution, I found your work. But I can't run https://github.com/HSE-asavchenko/face-emotion-recognition/blob/main/src/affectnet/train_emotions.ipynb with the current version of the AffectNet dataset. Please correct me if I'm wrong. My question is: can I run https://github.com/HSE-asavchenko/face-emotion-recognition/blob/main/src/affectnet/train_affectnet_march2021_pytorch.ipynb with MobileNet? I intend to build small applications that detect emotions on the client side and then send the results to a server.

    Many thanks,

    Son Nguyen.

    opened by sonnguyen1996 2
Releases: v0.2.1
Owner: Andrey Savchenko