Global-Local Attention for Emotion Recognition

Overview

This repository provides the implementation, the NCAER-S dataset, and a trained model for Global-Local Attention for Emotion Recognition (GLAMOR-Net).

Requirements

  • Python 3
  • Install tensorflow (or tensorflow-gpu) >= 2.0.0
  • Install the other required packages:
pip install cython
pip install opencv-python==4.3.0.36 matplotlib numpy==1.18.5 dlib
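
After installation, you can quickly check that a compatible TensorFlow build is available (a minimal sanity check, not part of the project scripts):

python -c "import tensorflow as tf; print(tf.__version__)"   # should print 2.0.0 or later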

Dataset

We provide the NCAER-S dataset with the original images and the extracted faces (a .txt file with four bounding-box coordinates per face).

The dataset can be downloaded from Google Drive.

Note that the dataset and labels should be structured as follows:

NCAER-S 
│
└───images
│   │
│   └───class_1
│   │   │   img1.jpg
│   │   │   img2.jpg
│   │   │   ...
│   └───class_2
│       │   img1.jpg
│       │   img2.jpg
│       │   ...
│   
└───crop
│   │
│   └───class_1
│   │   │   img1.txt
│   │   │   img2.txt
│   │   │   ...
│   └───class_2
│       │   img1.txt
│       │   img2.txt
│       │   ...
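
The exact layout of the crop .txt files is produced by the extraction step. As a rough sketch, assuming each line stores one detected face as four whitespace-separated bounding-box coordinates (please confirm against a generated file), such a file could be read like this:

# Hypothetical reader for a crop file; format assumed to be one face per line,
# four whitespace-separated coordinates (x1 y1 x2 y2).
with open('NCAER-S/crop/class_1/img1.txt') as f:
    boxes = [tuple(map(float, line.split())) for line in f if line.strip()]
print(boxes)  # e.g. [(x1, y1, x2, y2), ...]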

Running

Our code supports the following execution modes, selected with the -m or --mode argument:

# Extract faces from the <train, val or test> dataset (paths specified in config.py)
python run.py -m extract --dataset_type=train

# Train the model with the configuration specified in config.py
python run.py -m train 

# Evaluate the trained model on the <dataset_type> dataset
python run.py -m eval --dataset_type=test --trained_weights=path/to/weights

Evaluation

Our trained model is available at weights/glamor-net/Model.

  • First, download the dataset and extract it into the "data/" directory.
  • Then specify the paths to the test data (images and crop); a filled-in example is shown after the evaluation command below:
config = config.copy({
    'test_images': 'path_to_test_images',
    'test_crop':   'path_to_test_cropped_faces',  # .txt files
})
  • Run this command to evaluate the model. We use classification accuracy as the evaluation metric.
# Evaluate our model in the test set
python run.py -m eval --dataset_type=test --trained_weights=weights/glamor-net/Model
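
As a concrete illustration of the configuration above, if you extracted the dataset into the data/ directory with the structure shown earlier, the paths might look like this (illustrative only, adjust to your actual layout):

config = config.copy({
    'test_images': 'data/NCAER-S/images',  # original test images
    'test_crop':   'data/NCAER-S/crop',    # extracted face .txt files
})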

Training

First, extract the faces from the training set (extracting the validation set is optional):

  • Specify the path to the dataset in config.py (train_images, val_images, test_images)
  • Specify the desired face-extracted output path in config.py (train_crop, val_crop, test_crop)
config = config.copy({

    'train_images': 'path_to_training_images',
    'train_crop':   'path_to_training_cropped_faces',  # .txt files

    'val_images': 'path_to_validation_images',
    'val_crop':   'path_to_validation_cropped_faces',  # .txt files

})
  • Perform face extraction on each dataset split by running:
python run.py -m extract --dataset_type=<train, val or test>
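
For example, to run extraction for all three splits:

python run.py -m extract --dataset_type=train
python run.py -m extract --dataset_type=val
python run.py -m extract --dataset_type=test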

Start training:

# Train a new model from scratch
python run.py -m train 

# Continue training a model that you trained earlier
python run.py -m train --resume=path/to/trained_weights

# Resume the last checkpoint model
python run.py -m train --resume=last
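
For instance, to continue training from the released weights (assuming they are compatible with your current configuration; this is only an illustration):

python run.py -m train --resume=weights/glamor-net/Model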

Prediction

We support prediction on a single image or on all images in a directory by running the following commands:

# Predict on a single image
python predict.py --trained_weights=weights/glamor-net/Model --input=test_images/1.jpg --output=path/to/out/directory

# Predict on all images in a directory
python predict.py --trained_weights=weights/glamor-net/Model --input=test_images/ --output=out/

Use the --help option to see a description of all available command-line arguments.

Owner

Minh Nhat Le