SALOD

Source code of our work: "Benchmarking Deep Models for Salient Object Detection".
In this work, we propose a new benchmark for SALient Object Detection (SALOD) methods.

We re-implement 14 methods using the same settings, including input size, data loader and evaluation metrics (thanks to Metrics). Optimizer hyperparameters differ between methods because of their different network structures and objective functions; we tune the optimizer for each model to reach its best performance. Several other networks are still being debugged, and contributions that improve their performance are welcome.

Properties

  1. A unified interface for new models. To develop a new network, you only need to 1) set configs; 2) define the network; 3) define the loss function. See methods/template.
  2. We build a new dataset by collecting several prevalent datasets for the SOD task.
  3. Easy to adopt different backbones (available backbones: ResNet-50, VGG-16, MobileNet-v2, EfficientNet-B0, GhostNet, Res2Net).
  4. Test all networks on your own device. Given a network name, you can test any available method in our benchmark. Comparisons include FPS, GFLOPs, model size and multiple effectiveness metrics (FPS is measured roughly as sketched below).
  5. We implement a loss factory so that you can change loss functions via command-line parameters.
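
The speed comparison in item 4 comes down to timing forward passes at a fixed input size. The snippet below is only an illustrative sketch of that idea, not the exact logic of test_fps.py; the function name, warm-up count and CUDA assumption are ours:

# Illustrative FPS measurement sketch (assumes a CUDA device; not the exact test_fps.py code).
import time
import torch

def measure_fps(model, input_size=320, runs=100, warmup=10):
    model.eval().cuda()
    x = torch.randn(1, 3, input_size, input_size).cuda()
    with torch.no_grad():
        for _ in range(warmup):              # warm-up passes to stabilize CUDA kernels
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(runs):
            model(x)
        torch.cuda.synchronize()
    return runs / (time.time() - start)      # forward passes per second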

Available Methods:

Methods Publish. Input Weight Optim. LR Epoch Paper Src Code
DHSNet CVPR2016 320^2 95M Adam 2e-5 30 openaccess Pytorch
NLDF CVPR2017 320^2 161M Adam 1e-5 30 openaccess Pytorch/TF
Amulet ICCV2017 320^2 312M Adam 1e-5 30 openaccess Pytorch
SRM ICCV2017 320^2 240M Adam 5e-5 30 openaccess Pytorch
PicaNet CVPR2018 320^2 464M SGD 1e-2 30 openaccess Pytorch
DSS TPAMI2019 320^2 525M Adam 2e-5 30 IEEE/ArXiv Pytorch
BASNet CVPR2019 320^2 374M Adam 1e-5 30 openaccess Pytorch
CPD CVPR2019 320^2 188M Adam 1e-5 30 openaccess Pytorch
PoolNet CVPR2019 320^2 267M Adam 5e-5 30 openaccess Pytorch
EGNet ICCV2019 320^2 437M Adam 5e-5 30 openaccess Pytorch
SCRN ICCV2019 320^2 100M SGD 1e-2 30 openaccess Pytorch
GCPA AAAI2020 320^2 263M SGD 1e-2 30 aaai.org Pytorch
ITSD CVPR2020 320^2 101M SGD 5e-3 30 openaccess Pytorch
MINet CVPR2020 320^2 635M SGD 1e-3 30 openaccess Pytorch
Tuning ----- ----- ------ ------ ----- ----- ----- -----
*PAGE CVPR2019 320^2 ------ ------ ----- ----- openaccess TF
*PFA CVPR2019 320^2 ------ ------ ----- ----- openaccess Pytorch
*F3Net AAAI2020 320^2 ------ ------ ----- ----- aaai.org Pytorch
*PFPN AAAI2020 320^2 ------ ------ ----- ----- aaai.org Pytorch
*LDF CVPR2020 320^2 ------ ------ ----- ----- openaccess Pytorch

Usage

# model_name: lower-cased method name. E.g. poolnet, egnet, gcpa, dhsnet or minet.
python3 train.py model_name --gpus=0

python3 test.py model_name --gpus=0 --weight=path_to_weight 

python3 test_fps.py model_name --gpus=0

# To evaluate generated maps:
python3 eval.py --pre_path=path_to_maps
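
As a rough guide to the effectiveness metrics reported below: MAE is the mean absolute error between a predicted saliency map and its ground truth, and the F-measure combines precision and recall with beta^2 = 0.3 (max-F and ave-F take the maximum and the average over thresholds). The snippet below is only a simplified sketch; eval.py also reports Fbw, S-measure and E-measure, which are omitted here:

# Simplified sketch of MAE and F-measure for one saliency map (inputs assumed to be in [0, 1]).
import numpy as np

def mae(pred, gt):
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, threshold=0.5, beta2=0.3):
    binary = pred >= threshold
    mask = gt > 0.5
    tp = np.logical_and(binary, mask).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (mask.sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)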

Results

We report benchmark results here.
For more results, please refer to Reproduction, Few-shot and Generalization.

Notice: please contact us if you get better results.

VGG16-based:

Methods #Param. GFLOPs Tr. Time FPS max-F ave-F Fbw MAE SM EM
DHSNet 15.4 52.5 7.5 69.8 .884 .815 .812 .049 .880 .893
Amulet 33.2 1362 12.5 35.1 .855 .790 .772 .061 .854 .876
NLDF 24.6 136 9.7 46.3 .886 .824 .828 .045 .881 .898
SRM 37.9 73.1 7.9 63.1 .857 .779 .769 .060 .859 .874
PicaNet 26.3 74.2 40.5* 8.8 .889 .819 .823 .046 .884 .899
DSS 62.2 99.4 11.3 30.3 .891 .827 .826 .046 .888 .899
BASNet 80.5 114.3 16.9 32.6 .906 .853 .869 .036 .899 .915
CPD 29.2 85.9 10.5 36.3 .886 .815 .792 .052 .885 .888
PoolNet 52.5 236.2 26.4 23.1 .902 .850 .852 .039 .898 .913
EGNet 101 178.8 19.2 16.3 .909 .853 .859 .037 .904 .914
SCRN 16.3 47.2 9.3 24.8 .896 .820 .822 .046 .891 .894
GCPA 42.8 197.1 17.5 29.3 .903 .836 .845 .041 .898 .907
ITSD 16.9 76.3 15.2* 30.6 .905 .820 .834 .045 .901 .896
MINet 47.8 162 21.8 23.4 .900 .839 .852 .039 .895 .909

ResNet50-based:

Methods #Param. GFLOPs Tr. Time FPS max-F ave-F Fbw MAE SM EM
DHSNet 24.2 13.8 3.9 49.2 .909 .830 .848 .039 .905 .905
Amulet 79.8 1093.8 6.3 35.1 .895 .822 .835 .042 .894 .900
NLDF 41.1 115.1 9.2 30.5 .903 .837 .855 .038 .898 .910
SRM 61.2 20.2 5.5 34.3 .882 .803 .812 .047 .885 .891
PicaNet 106.1 36.9 18.5* 14.8 .904 .823 .843 .041 .902 .902
DSS 134.3 35.3 6.6 27.3 .894 .821 .826 .045 .893 .898
BASNet 95.5 47.2 12.2 32.8 .917 .861 .884 .032 .909 .921
CPD 47.9 14.7 7.7 22.7 .906 .842 .836 .040 .904 .908
PoolNet 68.3 66.9 10.2 33.9 .912 .843 .861 .036 .907 .912
EGNet 111.7 222.8 25.7 10.2 .917 .851 .867 .036 .912 .914
SCRN 25.2 12.5 5.5 19.3 .910 .838 .845 .040 .906 .905
GCPA 67.1 54.3 6.8 37.8 .916 .841 .866 .035 .912 .912
ITSD 25.7 19.6 5.7 29.4 .913 .825 .842 .042 .907 .899
MINet 162.4 87 11.7 23.5 .913 .851 .871 .034 .906 .917

Create New Model

To create a new model, you can copy the template folder and modify it as you want.

cp -r ./methods/template ./methods/new_name

For more details, please refer to the Python files in the template folder.
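
As a rough illustration of the three pieces a method folder provides (configs, network, loss), the sketch below shows what a minimal new method could look like. All names here are placeholders rather than the exact interface of methods/template; check the template files for the real signatures:

# Hypothetical sketch of a new method; see methods/template for the actual interface.
import torch.nn as nn
import torch.nn.functional as F

def custom_config(config):                   # 1) set configs (e.g. optimizer and learning rate)
    config['optim'] = 'Adam'
    config['lr'] = 1e-5
    return config

class Network(nn.Module):                    # 2) define the network on top of a shared backbone
    def __init__(self, backbone, channels):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Conv2d(channels[-1], 1, 1)

    def forward(self, x):
        feats = self.backbone(x)             # assumed to return a list of feature maps
        pred = self.head(feats[-1])
        return F.interpolate(pred, size=x.shape[2:], mode='bilinear', align_corners=False)

def loss_func(pred, gt):                     # 3) define the loss function
    return F.binary_cross_entropy_with_logits(pred, gt)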

Loss Factory

We supply a Loss Factory for easier tuning of loss functions. Set the --loss and --lw parameters to use it.

Here are some examples:

loss_dict = {'b': BCE, 's': SSIM, 'i': IOU, 'd': DICE, 'e': Edge, 'c': CTLoss}

python train.py ... --loss=bd
# loss = 1 * bce_loss + 1 * dice_loss

python train.py ... --loss=bs --lw=0.3,0.7
# loss = 0.3 * bce_loss + 0.7 * ssim_loss

python train.py ... --loss=bsid --lw=0.3,0.1,0.5,0.2
# loss = 0.3 * bce_loss + 0.1 * ssim_loss + 0.5 * iou_loss + 0.2 * dice_loss
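
Internally, a factory like this only needs to map each letter to a loss term and sum the weighted terms. The sketch below is our assumption about the idea, not the repository's actual factory code; only the BCE and IOU entries are filled in for brevity:

# Hypothetical sketch of how --loss/--lw could be combined (not the exact factory implementation).
import torch
import torch.nn.functional as F

def bce_loss(pred, gt):
    return F.binary_cross_entropy_with_logits(pred, gt)

def iou_loss(pred, gt):
    pred = torch.sigmoid(pred)
    inter = (pred * gt).sum(dim=(2, 3))
    union = (pred + gt - pred * gt).sum(dim=(2, 3))
    return (1 - (inter + 1) / (union + 1)).mean()

loss_dict = {'b': bce_loss, 'i': iou_loss}   # 's', 'd', 'e', 'c' would be registered similarly

def build_loss(keys, weights=None):
    weights = weights or [1.0] * len(keys)
    def total_loss(pred, gt):
        return sum(w * loss_dict[k](pred, gt) for k, w in zip(keys, weights))
    return total_loss

# e.g. --loss=bi --lw=0.7,0.3 would correspond to build_loss('bi', [0.7, 0.3])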