Official repository for Natural Image Matting via Guided Contextual Attention

Overview

GCA-Matting: Natural Image Matting via Guided Contextual Attention

The source code and models for Natural Image Matting via Guided Contextual Attention, which appears in AAAI-20.

Matting results on test data from alphamatting.com with trimap-user.

Requirements

Packages:

  • torch >= 1.1
  • tensorboardX
  • numpy
  • opencv-python
  • toml
  • easydict
  • pprint

GPU memory >= 8 GB for inference on the Adobe Composition-1K testing set
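
You can verify the environment with a short Python check before running anything. This is a minimal sketch, not part of the repository:

import torch

print("torch:", torch.__version__)                  # should be >= 1.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # inference on the Composition-1K testing set needs >= 8 GB
    print("GPU memory (GB):", round(props.total_memory / 1024**3, 1))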

Models

The models pretrained on the Adobe Image Matting Dataset are covered by the Adobe Deep Image Matting Dataset License Agreement and can only be used and distributed for non-commercial purposes.

Model Name           Training Data                            File Size   MSE      SAD     Grad    Conn
ResNet34_En_nomixup  ILSVRC 2012                              166 MB      N/A      N/A     N/A     N/A
gca-dist             Adobe Matting Dataset                    96.5 MB     0.0091   35.28   16.92   32.53
gca-dist-all-data    Adobe Matting Dataset + Composition-1K   96.5 MB     -        -       -       -
  • ResNet34_En_nomixup: Model of the customized ResNet-34 backbone trained on ImageNet. Save it to ./pretrain/. The training code for ResNet34_En_nomixup and more variants will be released later as an independent repository. You need this checkpoint only if you want to train your own matting model.
  • gca-dist: Model of GCA Matting from Table 2 of the paper. Save it to ./checkpoints/gca-dist/.
  • gca-dist-all-data: Model of GCA Matting trained on both the Adobe Image Matting Dataset and the Composition-1K testing set for the alphamatting.com online benchmark. Save it to ./checkpoints/gca-dist-all-data/.

(We removed the optimizer state_dict from gca-dist.pth and gca-dist-all-data.pth to save space, so you cannot resume training from these two checkpoints.)
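
If you are unsure whether a checkpoint still contains the optimizer state, you can inspect it with torch.load. The key layout below is illustrative; the exact contents of the released files may differ:

import torch

# Load on CPU and list the top-level entries of the checkpoint.
ckpt = torch.load("checkpoints/gca-dist/gca-dist.pth", map_location="cpu")
print(list(ckpt.keys()))   # expect model weights but no optimizer state_dict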

Run a Demo on alphamatting.com Testing Set

python demo.py \
--config=config/gca-dist-all-data.toml \
--checkpoint=checkpoints/gca-dist-all-data/gca-dist-all-data.pth \
--image-dir=demo/input_lowres \
--trimap-dir=demo/trimap_lowres/Trimap3 \
--output=demo/pred/Trimap3/

This will load the configuration from --config and save predictions to output/config_checkpoint/*. You can reproduce our alphamatting.com submission with this command.
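
If you want to run the demo for every trimap level in one go, a small driver like the sketch below works. It reuses the flags from the command above; the Trimap1/Trimap2/Trimap3 directory names are an assumption about your local layout:

import subprocess

for level in ["Trimap1", "Trimap2", "Trimap3"]:
    # Same arguments as the command above, with the trimap level swapped in.
    subprocess.run([
        "python", "demo.py",
        "--config=config/gca-dist-all-data.toml",
        "--checkpoint=checkpoints/gca-dist-all-data/gca-dist-all-data.pth",
        "--image-dir=demo/input_lowres",
        f"--trimap-dir=demo/trimap_lowres/{level}",
        f"--output=demo/pred/{level}/",
    ], check=True)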

Train and Evaluate on Adobe Image Matting Dataset

Data Preparation

Since each ground-truth alpha image in Composition-1K is shared by 20 merged images, we first copy and rename these alpha images so that each copy has the same name as its corresponding trimap. If your ground-truth images are in ./Combined_Dataset/Test_set/Adobe-licensed images/alpha, run the following command:

./copy_testing_alpha.sh Combined_Dataset/Test_set/Adobe-licensed\ images

The copied alpha images will be generated in Combined_Dataset/Test_set/Adobe-licensed images/alpha_copy.
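
If you prefer Python over the shell script, the same copy-and-rename step can be sketched as below. It assumes the usual Composition-1K naming, where each alpha image corresponds to trimaps named <alpha_name>_<i>.png for i from 0 to 19; check your local file names before relying on it:

import os
import shutil

root = "Combined_Dataset/Test_set/Adobe-licensed images"
alpha_dir = os.path.join(root, "alpha")
out_dir = os.path.join(root, "alpha_copy")
os.makedirs(out_dir, exist_ok=True)

for name in os.listdir(alpha_dir):
    stem, ext = os.path.splitext(name)
    # Duplicate each ground-truth alpha under the names of its 20 trimaps.
    for i in range(20):
        shutil.copy(os.path.join(alpha_dir, name),
                    os.path.join(out_dir, f"{stem}_{i}{ext}"))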

Configuration

TOML files in ./config/ are used as configurations. You can find the definitions and available options in ./utils/config.py.
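
Roughly speaking, a configuration is a TOML file parsed into attribute-style objects. The snippet below is only an illustration of that idea, not the actual code in ./utils/config.py:

import toml
from easydict import EasyDict

# Parse a TOML configuration and access nested options with dot notation.
with open("config/gca-dist.toml") as f:
    cfg = EasyDict(toml.load(f))

print(cfg.data.train_fg)   # assumes the [data] table shown in the Training section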

Training

Default training requires 4 GPUs with 11 GB of memory each, and the batch size is 10 per GPU. First, set your training and validation data paths in the configuration; the dataloader will merge (composite) training images on the fly, as sketched after the configuration snippet below:

[data]
train_fg = ""
train_alpha = ""
train_bg = ""
test_merged = ""
test_alpha = ""
test_trimap = ""
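
The on-the-fly merging is the standard compositing equation merged = alpha * fg + (1 - alpha) * bg applied to a foreground/background pair. A hedged sketch of the idea (shapes and value ranges are assumptions, not the repository's dataloader code):

import numpy as np

def composite(fg, bg, alpha):
    """fg, bg: HxWx3 float arrays; alpha: HxW float array in [0, 1]."""
    alpha = alpha[:, :, None]             # broadcast over the color channels
    return alpha * fg + (1.0 - alpha) * bg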

You can train the model by

./train.sh

or

OMP_NUM_THREADS=2 python -m torch.distributed.launch \
--nproc_per_node=4 main.py \
--config=config/gca-dist.toml

For single GPU training, set dist=false in your *.toml and run

python main.py --config=config/*.toml

Evaluation

To evaluate our model or your own on Composition-1K, set the path of the Composition-1K testing data and the model name in the configuration file *.toml:

[test]
merged = "./data/test/merged"
alpha = "./data/test/alpha_copy"
trimap = "./data/test/trimap"
# this will load ./checkpoint/*/gca-dist.pth
checkpoint = "gca-dist" 

and run the command:

./test.sh

or

python main.py \
--config=config/gca-dist.toml \
--phase=test

The predictions will be saved to ./prediction by default, and you can evaluate the results with the MATLAB script ./DIM_evaluation_code/evaluate.m, in which the evaluation functions are provided by Deep Image Matting. Please do not report quantitative results calculated by our Python code (e.g., ./utils/evaluate.py or test.sh) in your paper or project: the Grad and Conn functions in our reimplementation are not exactly the same as the MATLAB version.
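
For a quick local sanity check only (not for reporting, per the note above), MSE and SAD over the unknown region can be computed as below. The scaling conventions are assumptions and may not match evaluate.m exactly:

import numpy as np

def mse_sad(pred, gt, trimap):
    """pred, gt: HxW alpha mattes in [0, 1]; trimap: HxW with 128 marking the unknown region."""
    unknown = trimap == 128
    diff = (pred - gt)[unknown]
    mse = float(np.mean(diff ** 2))
    sad = float(np.sum(np.abs(diff))) / 1000.0   # SAD is usually reported in thousands
    return mse, sad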

Citation

If you find this work or code useful for your research, please cite:

@inproceedings{li2020natural,
  title={Natural image matting via guided contextual attention},
  author={Li, Yaoyi and Lu, Hongtao},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={34},
  pages={11450--11457},
  year={2020}
}