SALOD

Source code of our work: "Benchmarking Deep Models for Salient Object Detection".
In this work, we propose a new benchmark for SALient Object Detection (SALOD) methods.

We re-implement 14 methods using the same settings, including input size, data loader and evaluation metrics (thanks to Metrics). Optimizer hyperparameters differ between models because of their different network structures and objective functions; we tuned the optimizer of each model individually to reach its best performance. Several other networks are still being debugged, and contributions that push them to better performance are welcome.

Properties

  1. A unified interface for new models. To develop a new network, you only need to 1) set configs; 2) define the network; 3) define the loss function. See methods/template.
  2. A new dataset built by collecting several prevalent datasets for the SOD task.
  3. Easy switching between backbones (available backbones: ResNet-50, VGG-16, MobileNet-v2, EfficientNet-B0, GhostNet, Res2Net).
  4. Test all networks on your own device. By passing the name of a network, you can test any available method in our benchmark. Comparisons include FPS, GFLOPs, model size and multiple effectiveness metrics (a rough sketch of such measurements follows this list).
  5. A loss factory that lets you change loss functions via command-line parameters.
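As a rough illustration of how parameter count and FPS can be measured, here is a minimal sketch in plain PyTorch. It is only illustrative; the repo's test_fps.py may measure these differently, and the sketch assumes a CUDA device is available.

```python
import time
import torch

def measure_efficiency(model, input_size=320, runs=100, device='cuda'):
    """Rough sketch: parameter count (in millions) and FPS of a saliency model."""
    model = model.to(device).eval()
    n_params = sum(p.numel() for p in model.parameters()) / 1e6
    x = torch.randn(1, 3, input_size, input_size, device=device)
    with torch.no_grad():
        for _ in range(10):              # warm-up iterations before timing
            model(x)
        torch.cuda.synchronize()         # assumes a CUDA device
        start = time.time()
        for _ in range(runs):
            model(x)
        torch.cuda.synchronize()
    fps = runs / (time.time() - start)
    return n_params, fps
```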

Available Methods:

| Methods | Publish. | Input | Weight | Optim. | LR | Epoch | Paper | Src Code |
|---------|----------|-------|--------|--------|------|-------|-------|----------|
| DHSNet  | CVPR2016 | 320^2 | 95M  | Adam | 2e-5 | 30 | openaccess | Pytorch |
| NLDF    | CVPR2017 | 320^2 | 161M | Adam | 1e-5 | 30 | openaccess | Pytorch/TF |
| Amulet  | ICCV2017 | 320^2 | 312M | Adam | 1e-5 | 30 | openaccess | Pytorch |
| SRM     | ICCV2017 | 320^2 | 240M | Adam | 5e-5 | 30 | openaccess | Pytorch |
| PicaNet | CVPR2018 | 320^2 | 464M | SGD  | 1e-2 | 30 | openaccess | Pytorch |
| DSS     | TPAMI2019 | 320^2 | 525M | Adam | 2e-5 | 30 | IEEE/ArXiv | Pytorch |
| BASNet  | CVPR2019 | 320^2 | 374M | Adam | 1e-5 | 30 | openaccess | Pytorch |
| CPD     | CVPR2019 | 320^2 | 188M | Adam | 1e-5 | 30 | openaccess | Pytorch |
| PoolNet | CVPR2019 | 320^2 | 267M | Adam | 5e-5 | 30 | openaccess | Pytorch |
| EGNet   | ICCV2019 | 320^2 | 437M | Adam | 5e-5 | 30 | openaccess | Pytorch |
| SCRN    | ICCV2019 | 320^2 | 100M | SGD  | 1e-2 | 30 | openaccess | Pytorch |
| GCPA    | AAAI2020 | 320^2 | 263M | SGD  | 1e-2 | 30 | aaai.org | Pytorch |
| ITSD    | CVPR2020 | 320^2 | 101M | SGD  | 5e-3 | 30 | openaccess | Pytorch |
| MINet   | CVPR2020 | 320^2 | 635M | SGD  | 1e-3 | 30 | openaccess | Pytorch |
| Tuning  | ----- | ----- | ------ | ------ | ----- | ----- | ----- | ----- |
| *PAGE   | CVPR2019 | 320^2 | ------ | ------ | ----- | ----- | openaccess | TF |
| *PFA    | CVPR2019 | 320^2 | ------ | ------ | ----- | ----- | openaccess | Pytorch |
| *F3Net  | AAAI2020 | 320^2 | ------ | ------ | ----- | ----- | aaai.org | Pytorch |
| *PFPN   | AAAI2020 | 320^2 | ------ | ------ | ----- | ----- | aaai.org | Pytorch |
| *LDF    | CVPR2020 | 320^2 | ------ | ------ | ----- | ----- | openaccess | Pytorch |

Usage

```bash
# model_name: lower-cased method name, e.g. poolnet, egnet, gcpa, dhsnet or minet.
python3 train.py model_name --gpus=0

python3 test.py model_name --gpus=0 --weight=path_to_weight

python3 test_fps.py model_name --gpus=0

# To evaluate generated saliency maps:
python3 eval.py --pre_path=path_to_maps
```
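The metrics reported below (max-F, ave-F, Fbw, MAE, SM, EM) follow standard SOD definitions. As a minimal sketch of the two simplest ones, assuming predictions and ground truths are NumPy arrays in [0, 1] (the repo's actual evaluation comes from the Metrics package and may differ in detail):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a [0,1] saliency map and a binary mask."""
    return np.abs(pred - gt).mean()

def max_f_measure(pred, gt, beta2=0.3, steps=255):
    """Max F-measure: sweep binarization thresholds, keep the best score."""
    best = 0.0
    for t in np.linspace(0.0, 1.0, steps):
        binary = pred >= t
        tp = np.logical_and(binary, gt > 0.5).sum()
        precision = tp / (binary.sum() + 1e-8)
        recall = tp / ((gt > 0.5).sum() + 1e-8)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
        best = max(best, f)
    return best
```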

Results

We report benchmark results here.
For more results, please refer to Reproduction, Few-shot and Generalization.

Notice: please contact us if you obtain better results.

VGG16-based:

| Methods | #Param. | GFLOPs | Tr. Time | FPS | max-F | ave-F | Fbw | MAE | SM | EM | Weight |
|---------|---------|--------|----------|------|-------|-------|------|------|------|------|--------|
| DHSNet  | 15.4 | 52.5   | 7.5   | 69.8 | .884 | .815 | .812 | .049 | .880 | .893 | |
| Amulet  | 33.2 | 1362   | 12.5  | 35.1 | .855 | .790 | .772 | .061 | .854 | .876 | |
| NLDF    | 24.6 | 136    | 9.7   | 46.3 | .886 | .824 | .828 | .045 | .881 | .898 | |
| SRM     | 37.9 | 73.1   | 7.9   | 63.1 | .857 | .779 | .769 | .060 | .859 | .874 | |
| PicaNet | 26.3 | 74.2   | 40.5* | 8.8  | .889 | .819 | .823 | .046 | .884 | .899 | |
| DSS     | 62.2 | 99.4   | 11.3  | 30.3 | .891 | .827 | .826 | .046 | .888 | .899 | |
| BASNet  | 80.5 | 114.3  | 16.9  | 32.6 | .906 | .853 | .869 | .036 | .899 | .915 | |
| CPD     | 29.2 | 85.9   | 10.5  | 36.3 | .886 | .815 | .792 | .052 | .885 | .888 | |
| PoolNet | 52.5 | 236.2  | 26.4  | 23.1 | .902 | .850 | .852 | .039 | .898 | .913 | |
| EGNet   | 101  | 178.8  | 19.2  | 16.3 | .909 | .853 | .859 | .037 | .904 | .914 | |
| SCRN    | 16.3 | 47.2   | 9.3   | 24.8 | .896 | .820 | .822 | .046 | .891 | .894 | |
| GCPA    | 42.8 | 197.1  | 17.5  | 29.3 | .903 | .836 | .845 | .041 | .898 | .907 | |
| ITSD    | 16.9 | 76.3   | 15.2* | 30.6 | .905 | .820 | .834 | .045 | .901 | .896 | |
| MINet   | 47.8 | 162    | 21.8  | 23.4 | .900 | .839 | .852 | .039 | .895 | .909 | |

ResNet50-based:

| Methods | #Param. | GFLOPs | Tr. Time | FPS | max-F | ave-F | Fbw | MAE | SM | EM | Weight |
|---------|---------|--------|----------|------|-------|-------|------|------|------|------|--------|
| DHSNet  | 24.2  | 13.8   | 3.9   | 49.2 | .909 | .830 | .848 | .039 | .905 | .905 | |
| Amulet  | 79.8  | 1093.8 | 6.3   | 35.1 | .895 | .822 | .835 | .042 | .894 | .900 | |
| NLDF    | 41.1  | 115.1  | 9.2   | 30.5 | .903 | .837 | .855 | .038 | .898 | .910 | |
| SRM     | 61.2  | 20.2   | 5.5   | 34.3 | .882 | .803 | .812 | .047 | .885 | .891 | |
| PicaNet | 106.1 | 36.9   | 18.5* | 14.8 | .904 | .823 | .843 | .041 | .902 | .902 | |
| DSS     | 134.3 | 35.3   | 6.6   | 27.3 | .894 | .821 | .826 | .045 | .893 | .898 | |
| BASNet  | 95.5  | 47.2   | 12.2  | 32.8 | .917 | .861 | .884 | .032 | .909 | .921 | |
| CPD     | 47.9  | 14.7   | 7.7   | 22.7 | .906 | .842 | .836 | .040 | .904 | .908 | |
| PoolNet | 68.3  | 66.9   | 10.2  | 33.9 | .912 | .843 | .861 | .036 | .907 | .912 | |
| EGNet   | 111.7 | 222.8  | 25.7  | 10.2 | .917 | .851 | .867 | .036 | .912 | .914 | |
| SCRN    | 25.2  | 12.5   | 5.5   | 19.3 | .910 | .838 | .845 | .040 | .906 | .905 | |
| GCPA    | 67.1  | 54.3   | 6.8   | 37.8 | .916 | .841 | .866 | .035 | .912 | .912 | |
| ITSD    | 25.7  | 19.6   | 5.7   | 29.4 | .913 | .825 | .842 | .042 | .907 | .899 | |
| MINet   | 162.4 | 87     | 11.7  | 23.5 | .913 | .851 | .871 | .034 | .906 | .917 | |

Create New Model

To create a new model, you can copy the template folder and modify it as you want.

```bash
cp -r ./methods/template ./methods/new_name
```

For more details, please refer to the Python files in the template folder.
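A new method boils down to the three steps listed under Properties: set configs, define the network, and define the loss function. Below is a minimal sketch of that shape; names such as custom_config, the prediction head, and the backbone's output format are illustrative assumptions, not the template's exact API.

```python
import torch.nn as nn
import torch.nn.functional as F

# 1) Configs: hypothetical method-specific settings merged into the benchmark defaults.
custom_config = {'optim': 'Adam', 'lr': 1e-5, 'epoch': 30}

# 2) Network: decode backbone features into a one-channel saliency map.
class Network(nn.Module):
    def __init__(self, backbone, feat_channels=2048):
        super().__init__()
        self.backbone = backbone                    # any supported encoder
        self.head = nn.Conv2d(feat_channels, 1, 1)  # simple prediction head

    def forward(self, x):
        feats = self.backbone(x)[-1]                # assumes a list of feature maps
        pred = self.head(feats)
        return F.interpolate(pred, size=x.shape[2:],
                             mode='bilinear', align_corners=False)

# 3) Loss: binary cross-entropy between logits and the ground-truth mask.
def loss_fn(pred, gt):
    return F.binary_cross_entropy_with_logits(pred, gt)
```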

Loss Factory

We supply a Loss Factory for an easier way to tune loss functions. Set the --loss and --lw command-line parameters to use it.

Here are some examples:

```python
loss_dict = {'b': BCE, 's': SSIM, 'i': IOU, 'd': DICE, 'e': Edge, 'c': CTLoss}
```

```bash
python train.py ... --loss=bd
# loss = 1 * bce_loss + 1 * dice_loss

python train.py ... --loss=bs --lw=0.3,0.7
# loss = 0.3 * bce_loss + 0.7 * ssim_loss

python train.py ... --loss=bsid --lw=0.3,0.1,0.5,0.2
# loss = 0.3 * bce_loss + 0.1 * ssim_loss + 0.5 * iou_loss + 0.2 * dice_loss
```
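Under the hood, a factory like this reduces to a weighted sum over the selected losses. A minimal sketch, assuming each entry of loss_dict is a callable taking (pred, gt); this is illustrative, not the repo's exact code:

```python
def build_loss(loss_str, lw_str=None, loss_dict=None):
    """Combine single-letter loss keys and weights into one loss function."""
    losses = [loss_dict[k] for k in loss_str]
    weights = ([float(w) for w in lw_str.split(',')] if lw_str
               else [1.0] * len(losses))       # default: unit weights
    assert len(weights) == len(losses), '--lw must match --loss in length'

    def combined(pred, gt):
        return sum(w * fn(pred, gt) for w, fn in zip(weights, losses))
    return combined

# e.g. --loss=bs --lw=0.3,0.7 maps to:
# loss = 0.3 * BCE(pred, gt) + 0.7 * SSIM(pred, gt)
```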