Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation

Paper

Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation
Antoine Saporta, Tuan-Hung Vu, Matthieu Cord, Patrick Pérez
valeo.ai, France
IEEE International Conference on Computer Vision (ICCV), 2021 (Poster)

If you find this code useful for your research, please cite our paper:

@inproceedings{saporta2021mtaf,
  title={Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation},
  author={Saporta, Antoine and Vu, Tuan-Hung and Cord, Matthieu and P{\'e}rez, Patrick},
  booktitle={ICCV},
  year={2021}
}

Abstract

In this work, we address the task of unsupervised domain adaptation (UDA) for semantic segmentation in the presence of multiple target domains: the objective is to train a single model that can handle all these domains at test time. Such multi-target adaptation is crucial for the variety of scenarios that real-world autonomous systems must handle. It is a challenging setup, since one faces not only the domain gap between the labeled source set and the unlabeled target set, but also the distribution shifts existing within the latter, among the different target domains. To this end, we introduce two adversarial frameworks: (i) multi-discriminator, which explicitly aligns each target domain to its counterparts, and (ii) multi-target knowledge transfer, which learns a target-agnostic model thanks to a multi-teacher/single-student distillation mechanism. The evaluation is done on four newly proposed multi-target benchmarks for UDA in semantic segmentation. In all tested scenarios, our approaches consistently outperform baselines, setting competitive standards for the novel task.
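
For intuition, here is a minimal, hypothetical sketch of the multi-teacher/single-student distillation idea behind the multi-target knowledge transfer framework. This is not the authors' implementation: the function name and tensor shapes are illustrative assumptions, and only the use of a KL-divergence objective between target-specific teachers and a target-agnostic student is taken from the paper.

import torch
import torch.nn.functional as F

def mtkt_distillation_loss(student_logits, teacher_logits_list):
    # Conceptual sketch, not the repository's code.
    # student_logits: [B, C, H, W] output of the target-agnostic classifier.
    # teacher_logits_list: one [B, C, H, W] output per target-specific classifier.
    # Returns the mean KL(teacher || student) over all teachers.
    log_p_student = F.log_softmax(student_logits, dim=1)
    loss = torch.zeros((), device=student_logits.device)
    for teacher_logits in teacher_logits_list:
        # Teachers provide fixed soft targets here; they are trained
        # separately, each adversarially aligned to its own target domain.
        p_teacher = F.softmax(teacher_logits.detach(), dim=1)
        loss = loss + F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return loss / len(teacher_logits_list)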

Preparation

Pre-requisites

  • Python 3.7
  • PyTorch >= 0.4.1
  • CUDA 9.0 or higher

Installation

  1. Clone the repo:
$ git clone https://github.com/valeoai/MTAF
$ cd MTAF
  2. Install OpenCV if you don't already have it:
$ conda install -c menpo opencv
  3. Install NVIDIA Apex if you don't already have it, by following the instructions at https://github.com/NVIDIA/apex

  4. Install this repository and the dependencies using pip:

$ pip install -e <root_dir>

With this, you can edit the MTAF code on the fly and import functions and classes of MTAF in other projects as well (see the import sketch after this list).

  5. Optional. To uninstall this package, run:
$ pip uninstall MTAF
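
With the editable install above, MTAF is importable like any other Python package. A minimal sanity-check sketch (nothing here is specific to MTAF beyond the package name):

# Quick check that the editable install worked: the package should
# resolve into your cloned <root_dir> rather than site-packages.
import mtaf
print(mtaf.__file__)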

Datasets

By default, the datasets are put in <root_dir>/data. We use symlinks to hook the MTAF codebase to the datasets (a sketch for creating them follows the dataset list below). An alternative option is to explicitly set the parameters DATA_DIRECTORY_SOURCE and DATA_DIRECTORY_TARGET in the YML configuration files.

  • GTA5: Please follow the instructions here to download images and semantic segmentation annotations. The GTA5 dataset directory should have this basic structure:
<root_dir>/data/GTA5/                               % GTA dataset root
<root_dir>/data/GTA5/images/                        % GTA images
<root_dir>/data/GTA5/labels/                        % Semantic segmentation labels
...
  • Cityscapes: Please follow the instructions on the Cityscapes website to download the images and ground-truths. The Cityscapes dataset directory should have this basic structure:
<root_dir>/data/cityscapes/                         % Cityscapes dataset root
<root_dir>/data/cityscapes/leftImg8bit              % Cityscapes images
<root_dir>/data/cityscapes/leftImg8bit/train
<root_dir>/data/cityscapes/leftImg8bit/val
<root_dir>/data/cityscapes/gtFine                   % Semantic segmentation labels
<root_dir>/data/cityscapes/gtFine/train
<root_dir>/data/cityscapes/gtFine/val
...
  • Mapillary: Please follow the instructions in Mapillary Vistas to download the images and validation ground-truths. The Mapillary Vistas dataset directory should have this basic structure:
<root_dir>/data/mapillary/                          % Mapillary dataset root
<root_dir>/data/mapillary/train                     % Mapillary train set
<root_dir>/data/mapillary/train/images
<root_dir>/data/mapillary/validation                % Mapillary validation set
<root_dir>/data/mapillary/validation/images
<root_dir>/data/mapillary/validation/labels
...
  • IDD: Please follow the instructions in IDD to download the images and validation ground-truths. The IDD Segmentation dataset directory should have this basic structure:
<root_dir>/data/IDD/                         % IDD dataset root
<root_dir>/data/IDD/leftImg8bit              % IDD images
<root_dir>/data/IDD/leftImg8bit/train
<root_dir>/data/IDD/leftImg8bit/val
<root_dir>/data/IDD/gtFine                   % Semantic segmentation labels
<root_dir>/data/IDD/gtFine/val
...
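
A minimal sketch for creating the symlinks mentioned above, assuming the datasets were already downloaded somewhere else; all paths below are placeholder assumptions, not paths required by MTAF:

# Hypothetical helper: link already-downloaded datasets into <root_dir>/data.
# Adjust the source paths under /datasets and root_dir to your setup.
import os

root_dir = "/path/to/MTAF"  # your clone of the repository
links = {
    "/datasets/GTA5": os.path.join(root_dir, "data/GTA5"),
    "/datasets/cityscapes": os.path.join(root_dir, "data/cityscapes"),
    "/datasets/mapillary": os.path.join(root_dir, "data/mapillary"),
    "/datasets/IDD": os.path.join(root_dir, "data/IDD"),
}
os.makedirs(os.path.join(root_dir, "data"), exist_ok=True)
for src, dst in links.items():
    if not os.path.exists(dst):
        os.symlink(src, dst)  # <root_dir>/data/... now points at the real data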

Pre-trained models

Pre-trained models can be downloaded here and put in <root_dir>/pretrained_models.

Running the code

For evaluation, execute:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_baseline_pretrained.yml
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mdis_pretrained.yml
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mtkt_pretrained.yml

Training

For the experiments in the paper, we used PyTorch 1.3.1 and CUDA 10.0. To aid reproduction, the random seed is fixed in the code. Still, you may need to train a few times to reach comparable performance.
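
For reference, fixing the seed in a PyTorch training script typically looks like the sketch below. This is the standard pattern, not necessarily the exact code used in this repository; even with it, cuDNN non-determinism can cause small run-to-run differences, which is why several runs may be needed.

import random
import numpy as np
import torch

def fix_seed(seed=1234):
    # Standard seeding pattern for (mostly) reproducible PyTorch training.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False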

By default, logs and snapshots are stored in <root_dir>/experiments with this structure:

<root_dir>/experiments/logs
<root_dir>/experiments/snapshots

To train the multi-target baseline:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_baseline.yml

To train the Multi-Discriminator framework:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_mdis.yml

To train the Multi-Target Knowledge Transfer framework:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_mtkt.yml

Testing

To test the multi-target baseline:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_baseline.yml

To test the Multi-Discriminator framework:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mdis.yml

To test the Multi-Target Knowledge Transfer framework:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mtkt.yml

Acknowledgements

This codebase borrows heavily from ADVENT.

License

MTAF is released under the Apache 2.0 license.

Comments
  • question about adversarial training code in train_UDA.py

    Thank you for sharing the code for your excellent work. I have some basic questions about your implementation:

        pred_trg_main = interp_target(all_pred_trg_main[i+1]) ## what does [i+1] mean?
        pred_trg_main_list.append(pred_trg_main)
        pred_trg_target = interp_target(all_pred_trg_main[0]) ## what does [0] mean?
        pred_trg_target_list.append(pred_trg_target)

    In train_UDA.py, lines 829-836, why do we use the indices [i+1] and [0]? What do they mean? Also, where is the target-agnostic classifier defined in your code?

    Thanks again and look forward to hearing back from you!

    opened by yuzhang03 2
  • the problem for training loss

    Thanks for the enlightening work again.

    I trained the MDis method with one source and one target, but I am confused by the losses, which I plotted in TensorBoard. As I understand it, the adversarial loss should trend downward and the discriminator loss upward, but in the plots below both losses oscillate around a constant value. What is wrong?

    Besides, I would expect the training results to be better when training with one source and one target rather than one source and multiple targets, but in my training I don't get good results.

    I sincerely hope for your thoughts.

    My training config: adv loss weight 0.5, adv learning rate 1e-5, seg learning rate 1.25e-5.

    [plot: adversarial loss of one source and one target]

    [plot: discriminator loss of one source and one target]

    opened by slz929 2
  • problem for training data

    Thanks for the enlightening and practical work on multi-target domain adaptation! I have read your paper, and I noticed that the one source dataset and the three target datasets have unequal sizes; does the quantity of data in each domain matter? And what is an appropriate amount of training data for MTKT? Another question: why is the KL loss used for knowledge transfer? If I want to train a word embedding instead of a segmentation map, is the KL loss still appropriate, or is there a better alternative?

    opened by slz929 2
  • About the generation of segmentation color maps

    Thanks for the great research!

    I have a question though: the mIoU you report in your paper is for 7 classes, but the segmentation color maps in the qualitative analysis seem to use the 19 classes commonly used in domain-adaptive semantic segmentation.

    In other words, how can a model trained on 7 classes be used to generate a 19-class segmentation color map? Or am I wrong in my understanding?

    I look forward to your response.

    Thank you!

    opened by liwei1101 1
  • About labels of IDD dataset

    Hello! @SportaXD Thank you for your great work!

    I was reproducing the code and noticed that the labels in the IDD dataset come as JSON files rather than segmentation masks.

    How is this problem solved?

    opened by liwei1101 1
  • About MTKT code

    At line 758 of train_UDA.py:

            d_main_list[i] = d_main
            optimizer_d_main_list.append(optimizer_d_main)
            d_aux_list[i] = d_aux
            optimizer_d_aux_list.append(optimizer_d_aux)
    

    If this is done (d_main_list[i] = d_main and d_aux_list[i] = d_aux), won't all the entries in the list refer to the same discriminator? Shouldn't there be one discriminator for each classifier?

    opened by liwei1101 1
  • About 'the multi-target baseline'

    Thank you for sharing the code for your excellent work. I have some basic questions about your implementation.

    d_main = get_fc_discriminator(num_classes=num_classes)
    d_main.train()
    d_main.to(device)
    d_aux = get_fc_discriminator(num_classes=num_classes)
    d_aux.train()
    d_aux.to(device)
    

    Can you tell me why the multi-target baseline code uses only one discriminator instead of multiple discriminators? It looks like a single-domain approach. Thanks!

    opened by liwei1101 1
  • about eval_UDA.py

    Thanks for sharing your codes.

    I was impressed with your good research.

    Could you explain why the output map is not resized to the target size (cfg.TEST.OUTPUT_SIZE_TARGET) for the Mapillary dataset at line 57 of eval_UDA.py?

    When I tested the trained model on the Mapillary dataset, inference took a long time due to the large resolution.

    I'm looking forward to hearing from you.

    Thank you!

    opened by jdg900 1
  • modifying info7class.json and train_UDA.py

    We have found a small bug in "./MTAF/mtaf/dataset/cityscapes_list/info7class.json".

    The file should list 7 classes rather than 19. This shows up at the evaluation stage, where the mIoU metrics and the names of the 7 classes are printed.

    Also, there is a typo in the comments.

    opened by mohamedelmesawy 1
  • Running MTAF on a slightly different setup

    Hello, thanks for sharing the code and for such a good contribution. I would like to run your method on a slightly different setup, specifically adapting from Cityscapes to BDD and Mapillary. I have seen that the code accepts Cityscapes as both source and target, so that shouldn't be a problem, and I have added a dataloader for BDD as target 1.

    In order to get the best performance, do I need to train the baseline first and then train MTKT or MDIS with the baseline loaded as a pretrained model? Or do I get the best performance by directly running the MTKT or MDIS training script without the baseline?

    opened by fabriziojpiva 1