Reviving Iterative Training with Mask Guidance for Interactive Segmentation

Overview


This repository provides the source code for training and testing state-of-the-art click-based interactive segmentation models and is the official PyTorch implementation of the following paper:

Reviving Iterative Training with Mask Guidance for Interactive Segmentation
Konstantin Sofiiuk, Ilia Petrov, Anton Konushin
Samsung AI Center Moscow
https://arxiv.org/abs/2102.06583

Abstract: Recent works on click-based interactive segmentation have demonstrated state-of-the-art results by using various inference-time optimization schemes. These methods are considerably more computationally expensive compared to feedforward approaches, as they require performing backward passes through a network during inference and are hard to deploy on mobile frameworks that usually support only forward passes. In this paper, we extensively evaluate various design choices for interactive segmentation and discover that new state-of-the-art results can be obtained without any additional optimization schemes. Thus, we propose a simple feedforward model for click-based interactive segmentation that employs the segmentation masks from previous steps. It allows not only to segment an entirely new object, but also to start with an external mask and correct it. When analyzing the performance of models trained on different datasets, we observe that the choice of a training dataset greatly impacts the quality of interactive segmentation. We find that the models trained on a combination of COCO and LVIS with diverse and high-quality annotations show performance superior to all existing models.

Setting up an environment

This framework is built using Python 3.6 and relies on PyTorch 1.4.0+. The following command installs all necessary packages:

pip3 install -r requirements.txt

You can also use our Dockerfile to build a container with the configured environment.
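For example, assuming you run the command from the repository root, the image can be built and tagged like this (the tag name "ritm" is just an example):

# Build a Docker image from the provided Dockerfile ("ritm" is an arbitrary example tag)
docker build -t ritm .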

If you want to run training or testing, you must configure the paths to the datasets in config.yml.
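As a rough illustration, config.yml contains path entries similar to the sketch below. INTERACTIVE_MODELS_PATH and EXPS_PATH are referenced later in this README; the dataset key names shown here are only examples and may not exactly match the keys in your copy of the file.

# Illustrative sketch of config.yml -- check the keys that actually exist in the file
INTERACTIVE_MODELS_PATH: "./weights"
EXPS_PATH: "./experiments"

# Dataset locations (example key names; adjust them to match the file and your filesystem)
GRABCUT_PATH: "/path/to/datasets/GrabCut"
BERKELEY_PATH: "/path/to/datasets/Berkeley"
DAVIS_PATH: "/path/to/datasets/DAVIS"
SBD_PATH: "/path/to/datasets/SBD"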

Interactive Segmentation Demo


The GUI is based on the Tk toolkit and its Python bindings (tkinter). You can try our interactive demo with any of the provided models. Our scripts automatically detect the architecture of the loaded model; just specify the path to the corresponding checkpoint.

Examples of the script usage:

# This command runs the interactive demo with the HRNet18 ITER-M model from cfg.INTERACTIVE_MODELS_PATH on the GPU with id=0
# --checkpoint can be relative to cfg.INTERACTIVE_MODELS_PATH or an absolute path to the checkpoint
python3 demo.py --checkpoint=hrnet18_cocolvis_itermask_3p --gpu=0

# This command runs the interactive demo with the HRNet18 ITER-M model from /home/demo/fBRS/weights/
# If you do not have enough GPU memory, you can reduce --limit-longest-size (default=800)
python3 demo.py --checkpoint=/home/demo/fBRS/weights/hrnet18_cocolvis_itermask_3p --limit-longest-size=400

# You can try the demo in CPU-only mode
python3 demo.py --checkpoint=hrnet18_cocolvis_itermask_3p --cpu

Running the demo in Docker

# Activate xhost so the container can access the X server
xhost +
# Replace <id-or-tag-of-the-built-image> with the image built from the provided Dockerfile
docker run -v "$PWD":/tmp/ \
           -v /tmp/.X11-unix:/tmp/.X11-unix \
           -e DISPLAY=$DISPLAY \
           <id-or-tag-of-the-built-image> \
           python3 demo.py --checkpoint resnet34_dh128_sbd --cpu

Controls:

  • Left Mouse Button: place a positive click
  • Right Mouse Button: place a negative click
  • Scroll Wheel: zoom the image in and out
  • Right Mouse Button + Move Mouse: move the image
  • Space: finish the current object mask

Initializing the ITER-M models with an external segmentation mask


As described in our paper, ITER-M models take an image, the encoded user input, and the mask from the previous step as their input. A user can therefore initialize the model with an external mask before placing any clicks and correct this mask using the same interface. Our models handle this scenario successfully and allow the provided mask to be refined.

To initialize any ITER-M model with an external mask use the "Load mask" button in the menu bar.
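Conceptually, the previous-step (or externally loaded) mask is just an extra input channel that is fused with the encoded clicks before the segmentation backbone. The snippet below is a minimal, simplified sketch of this idea; the class and argument names are invented for illustration and do not correspond to the repository's actual modules:

import torch
import torch.nn as nn

class IterMaskInputSketch(nn.Module):
    # Toy illustration of the ITER-M input scheme: the encoded clicks and the
    # previous-step (or externally loaded) mask are fused by a small conv block
    # and added to the image tensor before the segmentation backbone.
    # The real models use HRNet backbones with a dedicated input block;
    # this class only sketches the idea and is not the repository's code.
    def __init__(self, image_channels=3):
        super().__init__()
        # 2 click maps (positive/negative disks) + 1 previous-mask channel
        self.fuse = nn.Conv2d(2 + 1, image_channels, kernel_size=1)

    def forward(self, image, click_maps, prev_mask):
        # image:      (B, 3, H, W)
        # click_maps: (B, 2, H, W)  encoded positive/negative clicks
        # prev_mask:  (B, 1, H, W)  previous prediction or external mask
        #             (all zeros when starting a new object)
        extra = self.fuse(torch.cat([click_maps, prev_mask], dim=1))
        return image + extra  # fused tensor would be passed to the backbone

# Starting from an external mask before any clicks are placed:
model = IterMaskInputSketch()
image = torch.randn(1, 3, 256, 256)
clicks = torch.zeros(1, 2, 256, 256)        # no clicks yet
external_mask = torch.ones(1, 1, 256, 256)  # e.g. a mask loaded via "Load mask"
fused = model(image, clicks, external_mask)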

Interactive segmentation options
  • ZoomIn (can be turned on/off using the checkbox; see the crop-size sketch after this list)
    • Skip clicks - the number of clicks to skip before ZoomIn is applied.
    • Target size - the ZoomIn crop is resized so that its longer side matches this value (increase it for large objects).
    • Expand ratio - the object bounding box is rescaled by this ratio before cropping.
    • Fixed crop - the ZoomIn crop is resized to (Target size, Target size).
  • BRS parameters (the BRS type can be changed using the dropdown menu)
    • Network clicks - the number of initial clicks included in the network's input. Subsequent clicks are processed only with BRS (NoBRS ignores this option).
    • L-BFGS-B max iterations - the maximum number of function evaluations per optimization step in BRS (increasing it improves accuracy at the cost of longer computation per click).
  • Visualisation parameters
    • The Prediction threshold slider adjusts the threshold used to binarize the probability map of the current object.
    • The Alpha blending coefficient slider adjusts the intensity of all predicted masks.
    • The Visualisation click radius slider adjusts the size of the red and green dots that depict clicks.
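The sketch below illustrates how Expand ratio and Target size interact when computing the ZoomIn crop; the function name and default values are invented for illustration and are not the repository's API:

def zoomin_crop_params(bbox, image_shape, expand_ratio=1.4, target_size=400):
    # Illustrative only: compute an expanded crop box and the resize scale.
    # bbox:        (x_min, y_min, x_max, y_max) of the current object estimate
    # image_shape: (height, width) of the input image
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    w, h = (x1 - x0) * expand_ratio, (y1 - y0) * expand_ratio  # "Expand ratio"

    # Expanded box, clipped to the image borders
    x0, x1 = int(max(0, cx - w / 2)), int(min(image_shape[1], cx + w / 2))
    y0, y1 = int(max(0, cy - h / 2)), int(min(image_shape[0], cy + h / 2))

    # "Target size": the crop is later resized so its longer side matches target_size
    scale = target_size / max(x1 - x0, y1 - y0)
    return (x0, y0, x1, y1), scale

# Example: a 240x220 object box in a 640x480 image
crop, scale = zoomin_crop_params((120, 80, 360, 300), image_shape=(480, 640))
print(crop, round(scale, 2))  # -> (72, 36, 408, 344) 1.19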

Datasets

We train all our models on SBD and COCO+LVIS and evaluate them on GrabCut, Berkeley, DAVIS, SBD, and Pascal VOC. We also provide links to two additional datasets, ADE20k and OpenImages, which are used in the ablation study.

| Dataset | Description | Download Link |
|---------|-------------|---------------|
| ADE20k | 22k images with 434k instances (total) | official site |
| OpenImages | 944k images with 2.6M instances (total) | official site |
| MS COCO | 118k images with 1.2M instances (train) | official site |
| LVIS v1.0 | 100k images with 1.2M instances (total) | official site |
| COCO+LVIS* | 99k images with 1.5M instances (train) | original LVIS images + our combined annotations |
| SBD | 8498 images with 20172 instances (train), 2857 images with 6671 instances (test) | official site |
| GrabCut | 50 images with one object each (test) | GrabCut.zip (11 MB) |
| Berkeley | 96 images with 100 instances (test) | Berkeley.zip (7 MB) |
| DAVIS | 345 images with one object each (test) | DAVIS.zip (43 MB) |
| Pascal VOC | 1449 images with 3417 instances (validation) | official site |
| COCO_MVal | 800 images with 800 instances (test) | COCO_MVal.zip (127 MB) |

Don't forget to change the paths to the datasets in config.yml after downloading and unpacking.

(*) To prepare COCO+LVIS, download the original LVIS v1.0 dataset, then download our pre-processed annotations (obtained by combining the COCO and LVIS datasets) and unpack them into the folder with LVIS v1.0.

Testing

Pretrained models

We provide pretrained models with different backbones for interactive segmentation.

You can find the model weights and evaluation results in the table below. NoC@85 and NoC@90 denote the average number of clicks required to reach 85% and 90% IoU with the ground-truth mask, respectively.

| Train Dataset | Model | GrabCut NoC@85 / NoC@90 | Berkeley NoC@90 | SBD NoC@85 / NoC@90 | DAVIS NoC@85 / NoC@90 | Pascal VOC NoC@85 | COCO MVal NoC@90 |
|---------------|-------|-------------------------|-----------------|---------------------|----------------------|-------------------|------------------|
| SBD | HRNet18 IT-M (38.8 MB) | 1.76 / 2.04 | 3.22 | 3.39 / 5.43 | 4.94 / 6.71 | 2.51 | 4.39 |
| COCO+LVIS | HRNet18 (38.8 MB) | 1.54 / 1.70 | 2.48 | 4.26 / 6.86 | 4.79 / 6.00 | 2.59 | 3.58 |
| COCO+LVIS | HRNet18s IT-M (16.5 MB) | 1.54 / 1.68 | 2.60 | 4.04 / 6.48 | 4.70 / 5.98 | 2.57 | 3.33 |
| COCO+LVIS | HRNet18 IT-M (38.8 MB) | 1.42 / 1.54 | 2.26 | 3.80 / 6.06 | 4.36 / 5.74 | 2.28 | 2.98 |
| COCO+LVIS | HRNet32 IT-M (119 MB) | 1.46 / 1.56 | 2.10 | 3.59 / 5.71 | 4.11 / 5.34 | 2.57 | 2.97 |

Evaluation

We provide a script to test all the presented models in all possible configurations on GrabCut, Berkeley, DAVIS, Pascal VOC, and SBD. To test a model, download its weights and put them into the ./weights folder (you can change this path in config.yml; see the INTERACTIVE_MODELS_PATH variable). To test any of our models, just specify the path to the corresponding checkpoint; our scripts automatically detect the architecture of the loaded model.

The following command runs the NoC evaluation on all test datasets (other options are displayed using '-h'):

python3 scripts/evaluate_model.py <brs-mode> --checkpoint=<checkpoint-name>

Examples of the script usage:

# This command evaluates the HRNetV2-W18-C+OCR ITER-M model in NoBRS mode on all datasets.
python3 scripts/evaluate_model.py NoBRS --checkpoint=hrnet18_cocolvis_itermask_3p

# This command evaluates the HRNet-W18-C-Small-v2+OCR ITER-M model in f-BRS-B mode on all datasets.
python3 scripts/evaluate_model.py f-BRS-B --checkpoint=hrnet18s_cocolvis_itermask_3p

# This command evaluates the HRNetV2-W18-C+OCR ITER-M model in NoBRS mode on the GrabCut and Berkeley datasets.
python3 scripts/evaluate_model.py NoBRS --checkpoint=hrnet18_cocolvis_itermask_3p --datasets=GrabCut,Berkeley

Jupyter notebook

You can also interactively experiment with our models using the test_any_model.ipynb Jupyter notebook.

Training

We provide the scripts for training our models on the SBD dataset. You can start training with the following commands:

# ResNet-34 non-iterative baseline model
python3 train.py models/noniterative_baselines/r34_dh128_cocolvis.py --gpus=0 --workers=4 --exp-name=first-try

# HRNet-W18-C-Small-v2+OCR ITER-M model
python3 train.py models/iter_mask/hrnet18s_cocolvis_itermask_3p.py --gpus=0 --workers=4 --exp-name=first-try

# HRNetV2-W18-C+OCR ITER-M model
python3 train.py models/iter_mask/hrnet18_cocolvis_itermask_3p.py --gpus=0,1 --workers=6 --exp-name=first-try

# HRNetV2-W32-C+OCR ITER-M model
python3 train.py models/iter_mask/hrnet32_cocolvis_itermask_3p.py --gpus=0,1,2,3 --workers=12 --exp-name=first-try

For each experiment, a separate folder is created in ./experiments with TensorBoard logs, text logs, visualizations, and checkpoints. You can specify another path in config.yml (see the EXPS_PATH variable).

Please note that we trained ResNet-34 and HRNet-18s on 1 GPU, HRNet-18 on 2 GPUs, and HRNet-32 on 4 GPUs (we used Nvidia Tesla P40 cards for training). To train on different hardware, adjust the batch size using the command-line argument --batch-size or change the default value in the model script.
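For example, to train the HRNet-18 ITER-M model on a single GPU with a smaller batch, you could run something like the command below (the value 16 is only an illustration; pick one that fits your GPU memory):

# Example: single-GPU training with a reduced batch size (16 is an arbitrary value for illustration)
python3 train.py models/iter_mask/hrnet18_cocolvis_itermask_3p.py --gpus=0 --workers=4 --batch-size=16 --exp-name=first-try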

We used the pre-trained HRNetV2 models from the official repository. If you want to train interactive segmentation models with these backbones, you need to download the weights and specify the paths to them in config.yml.

License

The code is released under the MIT License. It is a short, permissive software license. Basically, you can do whatever you want as long as you include the original copyright and license notice in any copy of the software/source.

Citation

If you find this work useful for your research, please cite our papers:

@article{reviving2021,
  title={Reviving Iterative Training with Mask Guidance for Interactive Segmentation},
  author={Konstantin Sofiiuk and Ilia Petrov and Anton Konushin},
  journal={arXiv preprint arXiv:2102.06583},
  year={2021}
}

@article{fbrs2020,
  title={f-BRS: Rethinking Backpropagating Refinement for Interactive Segmentation},
  author={Konstantin Sofiiuk and Ilia Petrov and Olga Barinova and Anton Konushin},
  journal={arXiv preprint arXiv:2001.10331},
  year={2020}
}
Comments
  • Training time

Hello,

Great work and a great GitHub repository! I'm curious how long training took to complete: was it a matter of days or weeks (on the Nvidia Tesla P40 GPUs you used)?

    Best!

    opened by UndecidedBoy 2
  • Little confusing on a result

Happy to read your great work again.

I am confused about the results in Table 1 and Table 3. Table 1 shows that HRNet-18 (Conv1S, Disk5) achieves 6.90 on DAVIS. In the first part of the ablation studies (Section 5.2), you state that you use HRNet with the Conv1S input scheme and encode clicks with disks of radius 5 in all other experiments. But in Table 3, the DAVIS result of HRNet-18 trained on SBD is 7.17. I don't understand the difference between them.

    opened by kleinzcy 2
  • Using prior masks

Hello!

Thank you for the great work. I was wondering if you could:

1. Point me to where in your code you utilize prior masks; I'm having trouble locating it.
2. If it's not obvious in the code, describe how you use past model predictions. Do you precompute these, or are they thresholded and saved?

    Thank you!

    opened by joshmyersdean 1
  • Resolution of image

Hi! Thanks for sharing this amazing work. I have noticed that the pre-trained model does not work as well as your demo on high-resolution images, but on normal-resolution images it works just as well as the shared demo. Can you please help me?

    opened by iresusharma 1
  • use ade20k dataset

Thanks for your work. I'm trying to train a model with ADE20k, but I can't find the "annotations-object-segmentation.pkl" referenced in ade20k.py, line 69. Could you provide this .pkl file? Thank you.

    opened by Robert-Hopkins 0
  • Question: Providing points from external source - without mouse

Hi @ksofiyuk,

I am planning to extend your framework by allowing points to be defined via a different input source (e.g., a spatial mouse) that will not have a direct connection to the PC where RITM is running. Could you please point me to where in the code you sample the clicks, so I can analyze how to substitute my own input for mouse clicks? Thanks a lot for the help!

    Best, Matteo

    opened by matteopantano 0
  • Web Demo

Thank you for your contribution to the vision community. I just want to ask: how can I use your model on a web server? Do you have any web-based demos available? How can I use it for multi-class segmentation? I am looking forward to your kind response.

    opened by Ehteshamciitwah 0
  • a question about interactive segmentation

Dear Sofiiuk: We are very interested in your interactive segmentation method, but we have a question and need your help. After testing your code with Python, we found that this method can only segment one class at a time. Can you give us some advice on multi-class segmentation? For example, when the user clicks different objects, it could show their class types, such as apple, car, and so on.

Yours sincerely,
Wang Zhipan,
Sun Yat-sen University

    opened by wzp8023391 0
  • AttributeError: module 'albumentations.augmentations.functional' has no attribute 'resize'

I ran into this problem when trying to train the model: AttributeError: module 'albumentations.augmentations.functional' has no attribute 'resize'. My albumentations version is 1.2.1. I've checked functional.py in albumentations, and there is no resize function. What can I do to solve this?

    opened by ermu2001 1
Owner
Visual Understanding Lab @ Samsung AI Center Moscow