An unofficial personal implementation of UM-Adapt, specifically to tackle joint estimation of panoptic segmentation and depth prediction for autonomous driving datasets.

Overview

Semisupervised Multitask Learning

This repository is an unofficial and slightly modified implementation of UM-Adapt[1] using PyTorch.

This code primarily deals with the tasks of semantic segmentation, instance segmentation and depth prediction, learned in a multi-task setting (with a shared encoder) on a synthetic dataset and then adapted to another dataset with a domain shift. Specifically, this implementation aims to learn the three tasks on the Cityscapes Dataset, then adapt and evaluate performance in a fully unsupervised or a semi-supervised setting on the IDD Dataset.
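
For intuition, a shared-encoder multi-task network of this kind can be sketched as follows. This is only an illustrative sketch: the class names, the ResNet-50 backbone and the single-convolution heads are placeholders, not the actual modules used in this repository.

    # Illustrative sketch of a shared-encoder multi-task network; all names and
    # layer choices here are placeholders, not this repository's actual modules.
    import torch
    import torch.nn as nn
    import torchvision.models as models


    class MultiTaskNet(nn.Module):
        """One shared encoder feeding three task-specific heads."""

        def __init__(self, num_classes=19):
            super().__init__()
            # Shared encoder: a ResNet-50 truncated before pooling/classification.
            backbone = models.resnet50()
            self.encoder = nn.Sequential(*list(backbone.children())[:-2])
            # Lightweight stand-ins for the real decoders (Panoptic-DeepLab heads
            # for segmentation, BTS/FCRN-Depth for depth).
            self.semantic_head = nn.Conv2d(2048, num_classes, kernel_size=1)
            self.instance_head = nn.Conv2d(2048, 3, kernel_size=1)  # centre heatmap + 2-ch offsets
            self.depth_head = nn.Conv2d(2048, 1, kernel_size=1)

        def forward(self, x):
            feats = self.encoder(x)  # features shared by all three tasks
            return {
                "semantic": self.semantic_head(feats),
                "instance": self.instance_head(feats),
                "depth": self.depth_head(feats),
            }


    if __name__ == "__main__":
        out = MultiTaskNet()(torch.randn(1, 3, 512, 1024))
        print({k: v.shape for k, v in out.items()})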

The architecture used for the semantic and instance segmentation model is taken from Panoptic-DeepLab[2], while the depth decoder can be chosen between BTS[3] and FCRN-Depth.
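
The depth-decoder choice could, for example, be exposed through a command-line flag along the lines below. The flag name and the stub decoder classes are assumptions for illustration only and do not reflect this repository's actual interface.

    # Hypothetical sketch of selecting the depth decoder via a flag; the flag name
    # and the stub classes are illustrative, not this repository's actual API.
    import argparse

    import torch.nn as nn


    class BTSDecoderStub(nn.Module):
        """Stand-in for a BTS-style depth decoder."""
        def __init__(self):
            super().__init__()
            self.head = nn.Conv2d(2048, 1, kernel_size=1)

        def forward(self, feats):
            return self.head(feats)


    class FCRNDecoderStub(nn.Module):
        """Stand-in for an FCRN-style depth decoder."""
        def __init__(self):
            super().__init__()
            self.head = nn.Sequential(
                nn.ConvTranspose2d(2048, 512, kernel_size=2, stride=2),
                nn.ReLU(inplace=True),
                nn.Conv2d(512, 1, kernel_size=1),
            )

        def forward(self, feats):
            return self.head(feats)


    DEPTH_DECODERS = {"bts": BTSDecoderStub, "fcrn": FCRNDecoderStub}

    parser = argparse.ArgumentParser()
    parser.add_argument("--depth_decoder", choices=sorted(DEPTH_DECODERS), default="bts")
    args = parser.parse_args()
    depth_decoder = DEPTH_DECODERS[args.depth_decoder]()  # instantiate the chosen decoder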

Usage

The following commands can be used to run the codebase; please see the respective papers for more details.

  1. To train the base encoder on Cityscapes (or any other dataset, with appropriate modifications), use the following command. Additional flags can also be set as required:

    python base_trainer.py --name BaseRun --cityscapes_dir /path/to/cityscapes

  2. Then train the CCR Regularizer as proposed in UM-Adapt with the following command (this step requires a pretrained HED[4] network; see also the PyTorch reimplementation[5]):

    python ccr_trainer.py --base_name BaseRun --cityscapes_dir /path/to/cityscapes --hed_path /path/to/pretrained/HED-Network

  3. Unsupervised adaptation to IDD can now be performed using:

    python idd_adapter.py --name AdaptIDD --base_name BaseRun --cityscapes_dir /path/to/cityscapes --idd_dir /path/to/idd --hed_path /path/to/pretrained/HED-Network

  4. Further optional semi-supervised fine-tuning can be done using the command below (a sketch of how the labelled fraction might be selected follows this list):

    python idd_supervised.py --name SupervisedIDD --base_name BaseRun --idd_name AdaptIDD --idd_epoch 10 --idd_dir /path/to/idd --hed_path /path/to/pretrained/HED-Network --supervised_pct 0.5
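
The --supervised_pct flag presumably controls the fraction of IDD ground truth used during fine-tuning. As a rough illustration of the idea only (this is not the repository's actual data pipeline), a labelled subset could be drawn like this:

    # Rough illustration of drawing a labelled fraction of a dataset for
    # semi-supervised fine-tuning; not this repository's actual data pipeline.
    import torch
    from torch.utils.data import Subset, TensorDataset

    supervised_pct = 0.5  # corresponds to --supervised_pct 0.5 above
    full_dataset = TensorDataset(torch.randn(100, 3, 64, 64),
                                 torch.randint(0, 19, (100,)))

    num_labelled = int(len(full_dataset) * supervised_pct)
    labelled_indices = torch.randperm(len(full_dataset))[:num_labelled].tolist()
    labelled_subset = Subset(full_dataset, labelled_indices)  # only these keep labels
    print(f"Fine-tuning with labels for {len(labelled_subset)}/{len(full_dataset)} samples")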

The code can generally be modified to suit any dataset as required; the base architectures of the different decoders, as well as the shared encoder, can also be altered as needed.
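
When plugging in a new dataset, the data loader will typically need to return an image together with the three task targets. A minimal sketch of such a wrapper is given below; the directory layout, field names and target encodings are assumptions for illustration, not this repository's actual interface.

    # Minimal sketch of a dataset wrapper for a new driving dataset; the directory
    # layout and target encodings are assumptions, not this repository's interface.
    import os

    import numpy as np
    import torch
    from PIL import Image
    from torch.utils.data import Dataset


    class DrivingDataset(Dataset):
        """Yields an RGB image with semantic/instance labels and a depth target."""

        def __init__(self, root, split="train"):
            self.image_dir = os.path.join(root, split, "images")
            self.label_dir = os.path.join(root, split, "labels")
            self.depth_dir = os.path.join(root, split, "depth")
            self.ids = sorted(os.listdir(self.image_dir))

        def __len__(self):
            return len(self.ids)

        def __getitem__(self, idx):
            name = self.ids[idx]
            image = np.array(Image.open(os.path.join(self.image_dir, name)).convert("RGB"),
                             dtype=np.float32) / 255.0
            label = np.array(Image.open(os.path.join(self.label_dir, name)), dtype=np.int64)
            depth = np.array(Image.open(os.path.join(self.depth_dir, name)), dtype=np.float32)
            return (torch.from_numpy(image).permute(2, 0, 1),  # CxHxW float image
                    torch.from_numpy(label),                   # per-pixel semantic/instance ids
                    torch.from_numpy(depth))                   # dense depth map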

References

If you find this code helpful in your research, please consider citing the following papers.

[1]  @inproceedings{Kundu_2019_ICCV,
        author = {Kundu, Jogendra Nath and Lakkakula, Nishank and Babu, R. Venkatesh},
        title = {UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation},
        booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
        month = {October},
        year = {2019}
    }
[2]  @inproceedings{cheng2020panoptic,
        author={Cheng, Bowen and Collins, Maxwell D and Zhu, Yukun and Liu, Ting and Huang, Thomas S and Adam, Hartwig and Chen, Liang-Chieh},
        title={Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month = {June},
        year = {2020}
    }
[3]  @article{lee2019big,
        title={From big to small: Multi-scale local planar guidance for monocular depth estimation},
        author={Lee, Jin Han and Han, Myung-Kyu and Ko, Dong Wook and Suh, Il Hong},
        journal={arXiv preprint arXiv:1907.10326},
        year={2019}
}
[4]  @inproceedings{Xie_ICCV_2015,
         author = {Saining Xie and Zhuowen Tu},
         title = {Holistically-Nested Edge Detection},
         booktitle = {IEEE International Conference on Computer Vision},
         year = {2015}
     }
[5]  @misc{pytorch-hed,
         author = {Simon Niklaus},
         title = {A Reimplementation of {HED} Using {PyTorch}},
         year = {2018},
         howpublished = {\url{https://github.com/sniklaus/pytorch-hed}}
    }

If you use either the Cityscapes or the IDD dataset, please consider citing them:

@inproceedings{Cordts2016Cityscapes,
    title={The Cityscapes Dataset for Semantic Urban Scene Understanding},
    author={Cordts, Marius and Omran, Mohamed and Ramos, Sebastian and Rehfeld, Timo and Enzweiler, Markus and Benenson, Rodrigo and Franke, Uwe and Roth, Stefan and Schiele, Bernt},
    booktitle={Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2016}
}
@article{DBLP:journals/corr/abs-1811-10200,
    title={IDD: A Dataset for Exploring Problems of Autonomous Navigation in Unconstrained Environments},
    author={Varma, Girish and Subramanian, Anbumani and Namboodiri, Anoop and Chandraker, Manmohan and Jawahar, C.V.},
    journal={arXiv preprint arXiv:1811.10200},
    year={2018}
}

Finally, if you use the Xception backbone, please consider citing

@inproceedings{deeplabv3plus2018,
    title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
    author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
    booktitle={ECCV},
    year={2018}
}

Acknowledgements

Utility functions from many wonderful open-source projects were used; I would especially like to thank their authors.
