Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation


Paper

Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation
Antoine Saporta, Tuan-Hung Vu, Matthieu Cord, Patrick Pérez
valeo.ai, France
IEEE International Conference on Computer Vision (ICCV), 2021 (Poster)

If you find this code useful for your research, please cite our paper:

@inproceedings{saporta2021mtaf,
  title={Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation},
  author={Saporta, Antoine and Vu, Tuan-Hung and Cord, Matthieu and P{\'e}rez, Patrick},
  booktitle={ICCV},
  year={2021}
}

Abstract

In this work, we address the task of unsupervised domain adaptation (UDA) for semantic segmentation in the presence of multiple target domains: the objective is to train a single model that can handle all these domains at test time. Such multi-target adaptation is crucial for a variety of scenarios that real-world autonomous systems must handle. It is a challenging setup, since one faces not only the domain gap between the labeled source set and the unlabeled target set, but also the distribution shifts existing within the latter among the different target domains. To this end, we introduce two adversarial frameworks: (i) multi-discriminator, which explicitly aligns each target domain to its counterparts, and (ii) multi-target knowledge transfer, which learns a target-agnostic model thanks to a multi-teacher/single-student distillation mechanism. The evaluation is done on four newly-proposed multi-target benchmarks for UDA in semantic segmentation. In all tested scenarios, our approaches consistently outperform baselines, setting competitive standards for the novel task.
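For intuition, here is a minimal sketch of the multi-discriminator idea in PyTorch, covering only the source-target part of the alignment. All names (segmenter, discriminators, adv_weight) are illustrative; this is a sketch of the general technique, not the repository's actual training loop:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def adversarial_step(segmenter, discriminators, src_images, tgt_images_list,
                         opt_seg, opt_dis_list, adv_weight=0.001):
        """One adversarial update with one discriminator per target domain."""
        bce = nn.BCEWithLogitsLoss()

        # Update the segmenter so that each target-specific discriminator
        # mistakes target predictions for source ones.
        opt_seg.zero_grad()
        loss_adv = 0.0
        for tgt_images, d in zip(tgt_images_list, discriminators):
            pred_tgt = F.softmax(segmenter(tgt_images), dim=1)
            d_out = d(pred_tgt)
            # Label target predictions as "source" (1) to fool the discriminator.
            loss_adv = loss_adv + bce(d_out, torch.ones_like(d_out))
        (adv_weight * loss_adv).backward()
        opt_seg.step()

        # Update each discriminator to separate source from its own target.
        pred_src = F.softmax(segmenter(src_images), dim=1).detach()
        for tgt_images, d, opt_d in zip(tgt_images_list, discriminators, opt_dis_list):
            opt_d.zero_grad()
            pred_tgt = F.softmax(segmenter(tgt_images), dim=1).detach()
            out_src, out_tgt = d(pred_src), d(pred_tgt)
            loss_d = bce(out_src, torch.ones_like(out_src)) \
                   + bce(out_tgt, torch.zeros_like(out_tgt))
            loss_d.backward()
            opt_d.step()

The multi-target knowledge-transfer variant replaces this direct alignment with target-specific teacher classifiers distilled into a single target-agnostic student; a sketch of the distillation loss appears in the comments section below.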

Preparation

Pre-requisites

  • Python 3.7
  • PyTorch >= 0.4.1
  • CUDA 9.0 or higher

Installation

  1. Clone the repo:
$ git clone https://github.com/valeoai/MTAF
$ cd MTAF
  2. Install OpenCV if you don't already have it:
$ conda install -c menpo opencv
  3. Install NVIDIA Apex if you don't already have it, following the instructions at https://github.com/NVIDIA/apex

  4. Install this repository and its dependencies using pip:

$ pip install -e <root_dir>

With this, you can edit the MTAF code on the fly and import functions and classes of MTAF in other projects as well (a quick check is shown after this list).

  5. Optional: to uninstall this package, run:
$ pip uninstall MTAF
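
To quickly verify the editable install, here is a minimal check; it assumes only that the package is importable as mtaf:

    import mtaf

    # With an editable (pip install -e) install, the package resolves to your
    # checkout, so source edits take effect without reinstalling.
    print(mtaf.__file__)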

Datasets

By default, the datasets are put in <root_dir>/data. We use symlinks to hook the MTAF codebase to the datasets. An alternative option is to explicitly specify the parameters DATA_DIRECTORY_SOURCE and DATA_DIRECTORY_TARGET in the YML configuration files.
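For example, assuming the datasets are stored under /data/datasets (an illustrative path), the symlinks can be created as follows:

$ mkdir -p <root_dir>/data
$ ln -s /data/datasets/GTA5 <root_dir>/data/GTA5
$ ln -s /data/datasets/cityscapes <root_dir>/data/cityscapes
$ ln -s /data/datasets/mapillary <root_dir>/data/mapillary
$ ln -s /data/datasets/IDD <root_dir>/data/IDD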

  • GTA5: Please follow the instructions here to download images and semantic segmentation annotations. The GTA5 dataset directory should have this basic structure:
<root_dir>/data/GTA5/                               % GTA dataset root
<root_dir>/data/GTA5/images/                        % GTA images
<root_dir>/data/GTA5/labels/                        % Semantic segmentation labels
...
  • Cityscapes: Please follow the instructions in Cityscapes to download the images and ground-truths. The Cityscapes dataset directory should have this basic structure:
<root_dir>/data/cityscapes/                         % Cityscapes dataset root
<root_dir>/data/cityscapes/leftImg8bit              % Cityscapes images
<root_dir>/data/cityscapes/leftImg8bit/train
<root_dir>/data/cityscapes/leftImg8bit/val
<root_dir>/data/cityscapes/gtFine                   % Semantic segmentation labels
<root_dir>/data/cityscapes/gtFine/train
<root_dir>/data/cityscapes/gtFine/val
...
  • Mapillary: Please follow the instructions in Mapillary Vistas to download the images and validation ground-truths. The Mapillary Vistas dataset directory should have this basic structure:
<root_dir>/data/mapillary/                          % Mapillary dataset root
<root_dir>/data/mapillary/train                     % Mapillary train set
<root_dir>/data/mapillary/train/images
<root_dir>/data/mapillary/validation                % Mapillary validation set
<root_dir>/data/mapillary/validation/images
<root_dir>/data/mapillary/validation/labels
...
  • IDD: Please follow the instructions in IDD to download the images and validation ground-truths. The IDD Segmentation dataset directory should have this basic structure:
<root_dir>/data/IDD/                         % IDD dataset root
<root_dir>/data/IDD/leftImg8bit              % IDD images
<root_dir>/data/IDD/leftImg8bit/train
<root_dir>/data/IDD/leftImg8bit/val
<root_dir>/data/IDD/gtFine                   % Semantic segmentation labels
<root_dir>/data/IDD/gtFine/val
...

Pre-trained models

Pre-trained models can be downloaded here and put in <root_dir>/pretrained_models.

Running the code

For evaluation, execute:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_baseline_pretrained.yml
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mdis_pretrained.yml
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mtkt_pretrained.yml

Training

For the experiments in the paper, we used PyTorch 1.3.1 and CUDA 10.0. To aid reproduction, the random seed has been fixed in the code. Still, you may need to train a few times to reach comparable performance.
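For reference, fixing the seed for a PyTorch training run typically looks like the sketch below; this is a generic pattern, not necessarily the exact code used in this repository:

    import random
    import numpy as np
    import torch

    def fix_seed(seed=0):
        # Seed every source of randomness touched during training.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # cuDNN autotuning is non-deterministic; disabling it trades a bit of
        # speed for reproducibility. Residual GPU non-determinism is why
        # several runs may still be needed to match the reported numbers.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False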

By default, logs and snapshots are stored in <root_dir>/experiments with this structure:

<root_dir>/experiments/logs
<root_dir>/experiments/snapshots

To train the multi-target baseline:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_baseline.yml

To train the Multi-Discriminator framework:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_mdis.yml

To train the Multi-Target Knowledge Transfer framework:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_mtkt.yml

Testing

To test the multi-target baseline:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_baseline.yml

To test the Multi-Discriminator framework:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mdis.yml

To test the Multi-Target Knowledge Transfer framework:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mtkt.yml

Acknowledgements

This codebase borrows heavily from ADVENT.

License

MTAF is released under the Apache 2.0 license.

Comments
  • question about adversarial training code in train_UDA.py

    Thank you for sharing the code for your excellent work. I have some basic questions about your implementation:

        pred_trg_main = interp_target(all_pred_trg_main[i+1])    ## what does [i+1] mean?
        pred_trg_main_list.append(pred_trg_main)
        pred_trg_target = interp_target(all_pred_trg_main[0])    ## what does [0] mean?
        pred_trg_target_list.append(pred_trg_target)

    In train_UDA.py, lines 829-836, why are indices [i+1] and [0] used? What do they mean? Also, where is the target-agnostic classifier defined in your code?

    Thanks again and look forward to hearing back from you!

    opened by yuzhang03 2
  • the problem for training loss

    Thanks for the enlightening work again.

    I trained the Mdis method with one source and one target, but I am confused by the losses, which I plotted with TensorBoard. As I understand it, the adversarial loss should trend downward while the discriminator loss trends upward, but in the plots below both losses just oscillate around a constant value. What is wrong?

    Besides, I would expect training with one source and one target to give better results than with one source and multiple targets, but in my training I do not get good results.

    I sincerely hope to hear your thoughts.

    My training config: adv loss weight: 0.5, adv learning rate: 1e-5, seg learning rate: 1.25e-5.

    [Plot: adversarial loss of one source and one target]

    [Plot: discriminator loss of one source and one target]

    opened by slz929 2
  • problem for training data

    Thanks for this enlightening and practical work on multi-target DA! I have read your paper and noticed that the one source dataset and three target datasets are of unequal size; does the quantity of data in each domain matter? And what is an appropriate amount of training data for MTKT? Another question: why is the KL loss used for knowledge transfer (see the sketch after this comment)? If I want to train an embedding instead of a segmentation map, is the KL loss still appropriate, or is there a better alternative?

    opened by slz929 2
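
    For context, the knowledge-transfer loss in a multi-teacher/single-student distillation setup is commonly implemented as a KL divergence between the student's and a teacher's per-pixel class distributions. A minimal sketch with illustrative names, not the repository's exact code:

        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits):
            # KL(teacher || student) over per-pixel class distributions.
            # kl_div expects log-probabilities as input and probabilities
            # as target; 'batchmean' averages over the batch dimension.
            return F.kl_div(
                F.log_softmax(student_logits, dim=1),
                F.softmax(teacher_logits, dim=1),
                reduction='batchmean',
            )

    KL divergence is a natural fit when the outputs are categorical distributions, as with segmentation maps; for continuous embeddings, an L2 or cosine distance is the more common distillation objective.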
  • About the generation of segmentation color maps

    Thanks for the great research!

    I have a question though, the mIoU you report in your paper is for 7 classes, but the segmentation colour map in the qualitative analysis seems to be for the 19 classes commonly used in domain adaptive semantic segmentation.

    In other words, how can a model trained on 7 classes be used to generate a 19-class segmentation colour map? Or am I wrong in my understanding?

    I look forward to your response.

    Thank you!

    opened by liwei1101 1
  • About labels of IDD dataset

    Hello! @SportaXD Thank you for your great work!

    I was reproducing the code and noticed that the labels in the IDD dataset are provided as JSON files rather than as segmentation masks.

    How is this problem solved?

    opened by liwei1101 1
  • About MTKT code

    In train_UDA.py, line 758:

            d_main_list[i] = d_main
            optimizer_d_main_list.append(optimizer_d_main)
            d_aux_list[i] = d_aux
            optimizer_d_aux_list.append(optimizer_d_aux)
    

    If this were done (d_main_list[i] = d_main and d_aux_list[i] = d_aux), wouldn't all the discriminators in the list be the same module? Shouldn't there be one discriminator for each classifier (see the sketch after this comment)?

    opened by liwei1101 1
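
    For reference, instantiating one independent discriminator (and optimizer) per target-specific classifier would look like the sketch below. It assumes the repository's get_fc_discriminator helper, with num_targets, num_classes, and device taken from the surrounding context:

        import torch

        # One independent discriminator and optimizer per target domain.
        d_main_list, optimizer_d_main_list = [], []
        for _ in range(num_targets):
            d_main = get_fc_discriminator(num_classes=num_classes)
            d_main.train()
            d_main.to(device)
            d_main_list.append(d_main)
            optimizer_d_main_list.append(
                torch.optim.Adam(d_main.parameters(), lr=1e-4, betas=(0.9, 0.99)))

    Note that if the loop in train_UDA.py re-creates d_main with get_fc_discriminator on every iteration before the assignment, each list entry is already a distinct module.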
  • About 'the multi-target baseline'

    Thank you for sharing the code for your excellent work. I have some basic questions about your implementation.

    d_main = get_fc_discriminator(num_classes=num_classes)
    d_main.train()
    d_main.to(device)
    d_aux = get_fc_discriminator(num_classes=num_classes)
    d_aux.train()
    d_aux.to(device)
    

    Can you tell me why the multi-target baseline code uses only one discriminator rather than multiple discriminators? It looks like a single-domain approach. Thanks!

    opened by liwei1101 1
  • about eval_UDA.py

    Thanks for sharing your codes.

    I was impressed with your good research.

    Could you explain why the output map is not resized to the target size (cfg.TEST.OUTPUT_SIZE_TARGET) for the Mapillary dataset in line 57 of eval_UDA.py?

    When I tested the trained model on Mapillary dataset, inference took a long time due to the large resolution.

    I'm looking forward to hearing from you.

    Thank you!

    opened by jdg900 1
  • modifying info7class.json and train_UDA.py

    We have found a small bug in "./MTAF/mtaf/dataset/cityscapes_list/info7class.json".

    The file should list 7 classes rather than 19. This shows up at the evaluation stage, where the mIoU metrics are printed along with the names of the 7 classes.

    Also, there is a typo in the comments.

    opened by mohamedelmesawy 1
  • Running MTAF on a slightly different setup

    Hello, thanks for sharing the code and such a good contribution. I would like to run your method on a slightly different setup, specifically adapting from Cityscapes ---> BDD, Mapillary. I have seen that the code accepts Cityscapes as both source and target, so that shouldn't be a problem, and I have added a dataloader for BDD as target 1.

    To get the best performance, do I need to train the baseline first and then train MTKT or MDIS with the baseline loaded as a pretrained model? Or do I get the best performance by directly running the MTKT or MDIS training script without the baseline?

    opened by fabriziojpiva 1