DeepFace-EMD: Re-ranking Using Patch-wise Earth Mover’s Distance Improves Out-Of-Distribution Face Identification

Official Implementation for the paper DeepFace-EMD: Re-ranking Using Patch-wise Earth Mover’s Distance Improves Out-Of-Distribution Face Identification (2021) by Hai Phan and Anh Nguyen.

If you use this software, please consider citing:

@article{hai2021deepface,
  title={DeepFace-EMD: Re-ranking Using Patch-wise Earth Mover’s Distance Improves Out-Of-Distribution Face Identification},
  author={Phan, Hai and Nguyen, Anh},
  journal={arXiv preprint arXiv:2112.04016},
  year={2021}
}

1. Requirements

Python >= 3.5
PyTorch > 1.0
OpenCV >= 3.4.4
pip install tqdm
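
As a quick sanity check of the environment (an illustrative snippet, not part of the repository), the imports below should succeed with the versions listed above:

# Illustrative environment check; not part of the repository.
import sys
import torch
import cv2
import tqdm

print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("OpenCV :", cv2.__version__)
print("tqdm   :", tqdm.__version__)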

2. Download datasets and pretrained models

  1. Download LFW, out-of-distribution (OOD) LFW test sets, and pretrained models: Google Drive

  2. Create the following folders:

mkdir data
mkdir pretrained
  3. Extract the LFW datasets (e.g. lfw_crop_96x112.tar.gz) to data/
  4. Copy the models (e.g. resnet18_110.pth) to pretrained/
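
A small sanity check (a sketch, not part of the repository) that the extracted datasets and the copied model are where the commands in Section 3 expect them; the paths follow the examples in this README:

# Check the layout expected by the test scripts (paths follow the README examples).
from pathlib import Path

expected = [
    Path("data/lfw-align-128"),           # normal LFW images
    Path("data/lfw-align-128-crop70"),    # randomly-cropped OOD variant
    Path("pretrained/resnet18_110.pth"),  # ArcFace checkpoint
]
for path in expected:
    print(f"{path}: {'found' if path.exists() else 'missing'}")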

3. How to run

3.1 Run examples

  • Run testing on LFW images

    • -mask, -sunglass, -crop: flags for using the corresponding OOD query images (i.e., faces with masks, faces with sunglasses, or randomly-cropped faces).
    bash run_test.sh
    
  • Run the demo: the demo shows the top-5 results of stage 1 and stage 2 (including the flow visualization of EMD).

    • -mask: image retrieval using a masked-face query image given a gallery of normal LFW images.
    • -sunglass and -crop: similar to the setup of -mask.
    • The results will be saved in the results/demo directory.
    bash run_demo.sh
    
  • Run retrieval using the full LFW gallery

    • Set the args.data_folder argument to data in the .sh files.

3.2 Reproduce results

  • Make sure the lfw-align-128 and lfw-align-128-crop70 datasets are in the data/ directory (e.g. data/lfw-align-128-crop70) and the ArcFace [2] model resnet18_110.pth is in the pretrained/ directory (e.g. pretrained/resnet18_110.pth). Run the following commands to reproduce the Table 1 results in our paper.

    • Arguments:

      • -method: apc, uniform, or sc (the feature-weighting options described in the paper).
      • -l: 4 or 8 for a 4x4 or 8x8 patch grid, respectively.
      • -a: the alpha parameter described in the paper; a toy sketch of how alpha enters the re-ranking is given at the end of this subsection.
    • Normal LFW with 1680 classes:

    python test_face.py -method apc -fm arcface -d lfw_1680 -a -1 -data_folder data -l 4
    
    • LFW-crop:
    python test_face.py -method apc -fm arcface -d lfw -a 0.7 -data_folder data -l 4 -crop 
    
    • Note: the full LFW dataset has 5,749 people and 13,233 images in total, but only 1,680 people have two or more images (see LFW for details). In our normal LFW setup, identical images are not considered in face identification. So the difference between lfw and lfw_1680 is that lfw uses the full LFW dataset (including people with only a single image), whereas lfw_1680 uses only the 1,680 people who have two or more images.
  • For the other OOD datasets, run the following commands:

    • LFW-mask:
    python test_face.py -method apc -fm arcface -d lfw -a 0.7 -data_folder data -l 4 -mask 
    
    • LFW-sunglass:
    python test_face.py -method apc -fm arcface -d lfw -a 0.7 -data_folder data -l 4 -sunglass 
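
To make the roles of -method and -a concrete, here is a toy sketch of the two-stage idea: stage 1 ranks the gallery by cosine distance on image-level embeddings, and stage 2 re-ranks a shortlist using patch-wise EMD blended with the cosine distance via alpha. The sketch assumes uniform patch weights (the simplest -method option) and a hypothetical shortlist size; the exact weighting, shortlist, and blending used in test_face.py may differ.

# Toy sketch of stage-1 cosine ranking and stage-2 patch-wise EMD re-ranking.
# Uniform patch weights only; test_face.py implements the full APC/SC variants.
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_dist(a, b):
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def patch_emd(q_patches, g_patches):
    # q_patches, g_patches: (num_patches, dim). With uniform weights over the
    # same number of patches, the EMD reduces to an assignment problem.
    qn = q_patches / np.linalg.norm(q_patches, axis=1, keepdims=True)
    gn = g_patches / np.linalg.norm(g_patches, axis=1, keepdims=True)
    cost = 1.0 - qn @ gn.T
    rows, cols = linear_sum_assignment(cost)
    return float(cost[rows, cols].mean())

def rerank(query, gallery, alpha=0.7, shortlist=100):
    # query and gallery entries: {"global": (dim,), "patches": (num_patches, dim)}
    stage1 = sorted(gallery, key=lambda g: cosine_dist(query["global"], g["global"]))
    head, tail = stage1[:shortlist], stage1[shortlist:]
    head.sort(key=lambda g: alpha * patch_emd(query["patches"], g["patches"])
                            + (1.0 - alpha) * cosine_dist(query["global"], g["global"]))
    return head + tail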
    

3.3 Run visualization with two images

python visualize_faces.py -method [methods] -fm [face models] -model_path [model dir] -in1 [1st image] -in2 [2nd image] -weight [1/0: show weight heatmaps]

The results are saved in results/flow and results/heatmap (if the -weight flag is on).

3.4 Use your own images

  1. Facial alignment. See align_face.py for details.
pip install scikit-image
pip install face-alignment
  • For making face alignment with a size of 160x160 for ArcFace (128x128) and FaceNet (160x160), the reference points are as follows (see the alignment function in align_face.py). A hedged alignment sketch is also given after this list.
ref_pts = [ [61.4356, 54.6963],[118.5318, 54.6963], [93.5252, 90.7366],[68.5493, 122.3655],[110.7299, 122.3641]]
crop_size = (160, 160)
  2. Create a folder containing all persons (one subfolder per person, named after that person) and put it in data/.
  3. Create a txt file with the format [image_path],[label] for that folder (see the lfw file for details).
  4. Modify the face loader: add your txt file in the function get_face_dataloader.
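
If you start from unaligned crops, the warp itself is a similarity transform from five detected landmarks to the reference points above. Below is a hedged sketch using scikit-image and OpenCV (installed above); src_pts is a hypothetical placeholder for the five landmarks (left eye, right eye, nose tip, left and right mouth corners) from your own detector, e.g. the face-alignment package. The repository's own routine is in align_face.py and may differ in details.

# Sketch of 5-point alignment to the reference points above; see align_face.py
# for the repository's actual routine. src_pts is a hypothetical placeholder.
import cv2
import numpy as np
from skimage import transform as trans

ref_pts = np.array([[61.4356, 54.6963], [118.5318, 54.6963], [93.5252, 90.7366],
                    [68.5493, 122.3655], [110.7299, 122.3641]], dtype=np.float32)
crop_size = (160, 160)

def align(img, src_pts):
    # Estimate the similarity transform mapping the detected landmarks to
    # ref_pts, then warp the image to the crop size.
    tform = trans.SimilarityTransform()
    tform.estimate(np.asarray(src_pts, dtype=np.float32), ref_pts)
    return cv2.warpAffine(img, tform.params[:2, :], crop_size)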

4. License

MIT

5. References

  1. W. Zhao, Y. Rao, Z. Wang, J. Lu, J. Zhou. Towards interpretable deep metric learning with structural matching, ICCV 2021. DIML
  2. J. Deng, J. Guo, N. Xue, S. Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition, CVPR 2019. Arcface Pytorch
  3. H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, W. Liu. CosFace: Large margin cosine loss for deep face recognition, CVPR 2018. CosFace Pytorch
  4. F. Schroff, D. Kalenichenko, J. Philbin. FaceNet: A unified embedding for face recognition and clustering, CVPR 2015. FaceNet Pytorch
  5. W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, L. Song. SphereFace: Deep hypersphere embedding for face recognition, CVPR 2017. sphereface, sphereface pytorch
  6. C. Zhang, Y. Cai, G. Lin, C. Shen. DeepEMD: Differentiable Earth Mover's Distance for few-shot learning, CVPR 2020. paper