Zero-Shot Domain Adaptation with a Physics Prior

[arXiv] [sup. material] - ICCV 2021 Oral paper by Attila Lengyel, Sourav Garg, Michael Milford and Jan van Gemert.

This repository contains the PyTorch implementation of Color Invariant Convolutions and all experiments and datasets described in the paper.

Abstract

We explore the zero-shot setting for day-night domain adaptation. The traditional domain adaptation setting is to train on one domain and adapt to the target domain by exploiting unlabeled data samples from the test set. As gathering relevant test data is expensive and sometimes even impossible, we remove any reliance on test data imagery and instead exploit a visual inductive prior derived from physics-based reflection models for domain adaptation. We cast a number of color invariant edge detectors as trainable layers in a convolutional neural network and evaluate their robustness to illumination changes. We show that the color invariant layer reduces the day-night distribution shift in feature map activations throughout the network. We demonstrate improved performance for zero-shot day to night domain adaptation on both synthetic as well as natural datasets in various tasks, including classification, segmentation and place recognition.

Getting started

All code and experiments have been tested with PyTorch 1.7.0.

Create a local clone of this repository:

git clone https://github.com/Attila94/CIConv

The method directory contains the color invariant convolution (CIConv) layer, as well as custom ResNet and VGG models using the CIConv layer. To use the CIConv layer in your own architecture, simply copy ciconv2d.py to the desired directory and add it as a regular PyTorch layer:

from ciconv2d import CIConv2d

# 'W' selects the color invariant; k is the kernel size and scale the
# initial value of the layer's learnable scale parameter.
ciconv = CIConv2d('W', k=3, scale=0.0)

See resnet.py and vgg.py for examples.
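
For illustration, here is a minimal sketch of plugging CIConv2d in front of a standard torchvision ResNet-18. This is an assumption-laden example, not the repository's resnet.py: it assumes the layer maps a 3-channel RGB batch to a single-channel invariant map, as described in the paper.

import torch
import torch.nn as nn
from torchvision.models import resnet18
from ciconv2d import CIConv2d

class CIConvResNet18(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # CIConv turns the RGB input into a color invariant edge map.
        self.ciconv = CIConv2d('W', k=3, scale=0.0)
        self.backbone = resnet18(num_classes=num_classes)
        # Adapt the ResNet stem to the single-channel CIConv output
        # (assumption: one output channel, as in the models in method/).
        self.backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                        padding=3, bias=False)

    def forward(self, x):
        return self.backbone(self.ciconv(x))

model = CIConvResNet18(num_classes=10)
logits = model(torch.randn(2, 3, 224, 224))  # -> shape (2, 10)

For reproducing the paper's results, prefer the ready-made models in the method directory.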

Datasets

Shapenet Illuminants

[Download link]

Shapenet Illuminants is used in the synthetic classification experiment. The images are rendered from a subset of the ShapeNet dataset using the physically based renderer Mitsuba. Each scene is illuminated by an ambient light source and a point light modeled as a black-body radiator with color temperatures ranging from 1900 K to 20000 K. The training set contains 1,000 samples for each of the 10 object classes, rendered under "normal" lighting conditions (T = 6500 K). Multiple test sets with 300 samples per class are rendered for a variety of light source intensities and colors.

[Figure: example renderings from the ShapeNet Illuminants dataset]

Common Objects Day and Night

[Download link]

Common Objects Day and Night (CODaN) is a natural day-night image classification dataset. More information can be found in the separate GitHub repository: https://github.com/Attila94/CODaN.

[Figure: example day and night images from the CODaN dataset]

Experiments

1. Synthetic classification

  1. Download the Shapenet Illuminants dataset [link] and unpack it.
  2. In your local CIConv clone, navigate to experiments/1_synthetic_classification and run

    python train.py --root 'path/to/shapenet_illuminants' --hflip --seed 0 --invariant 'W'

This will train a ResNet-18 with the 'W' color invariant from scratch and evaluate on all test sets.

[Figure: ShapeNet Illuminants classification results]

Classification accuracy of ResNet-18 with various color invariants. RGB (not invariant) performance degrades when illumination conditions differ between train and test set, while color invariants remain more stable. W performs best overall.

2. CODaN classification

  1. Download the Common Objects Day and Night (CODaN) dataset from https://github.com/Attila94/CODaN.
  2. In your local CIConv clone, navigate to experiments/2_codan_classification and run

    python train.py --root 'path/to/codan' --invariant 'W' --scale 0. --hflip --jitter 0.3 --rr 20 --seed 0

This will train a ResNet-18 with the 'W' color invariant from scratch and evaluate on all test sets.

Selected results from the paper:

Method     Day (% accuracy)    Night (% accuracy)
Baseline   80.39 ± 0.38        48.31 ± 1.33
E          79.79 ± 0.40        49.95 ± 1.60
W          81.49 ± 0.49        59.67 ± 0.93
C          78.04 ± 1.08        53.44 ± 1.28
N          77.44 ± 0.00        52.03 ± 0.27
H          75.20 ± 0.56        50.52 ± 1.34

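As a hedged usage illustration, the snippet below runs such a trained model on a single night image. It reuses the CIConvResNet18 sketch from the Getting started section; the checkpoint and image paths are hypothetical placeholders, not files produced by train.py.

import torch
from PIL import Image
from torchvision import transforms

model = CIConvResNet18(num_classes=10)  # sketch class from Getting started
model.load_state_dict(torch.load('codan_w.pth', map_location='cpu'))  # placeholder path
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
img = preprocess(Image.open('night_example.jpg').convert('RGB')).unsqueeze(0)
with torch.no_grad():
    pred = model(img).argmax(dim=1).item()  # predicted CODaN class index
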
3. Semantic segmentation

  1. Download and unpack the following public datasets: Cityscapes, Nighttime Driving, Dark Zurich.

  2. In your local CIConv clone navigate to experiments/3_segmentation.

  3. Set the proper dataset locations in train.py.

  4. Run

    python train.py --hflip --rc --jitter 0.3 --scale 0.3 --batch-size 6 --pretrained --invariant 'W'

Selected results from the paper:

Method                 Nighttime Driving (mIoU)  Dark Zurich (mIoU)
RefineNet [baseline]   34.1                      30.6
W-RefineNet [ours]     41.6                      34.5

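mIoU here is the standard mean intersection-over-union averaged over classes. Below is a minimal NumPy sketch of the metric, included for clarity; it is an illustration, not the repository's evaluation code.

import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU from two integer label maps of identical shape."""
    # Joint histogram of (target, pred) pairs, i.e. the confusion matrix.
    conf = np.bincount(num_classes * target.ravel() + pred.ravel(),
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    with np.errstate(divide='ignore', invalid='ignore'):
        iou = inter / union  # NaN for classes absent from both maps
    return np.nanmean(iou)   # average over the classes that occur
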
4. Visual place recognition

  1. Set up the conda environment:

    conda create -n ciconv python=3.9 mamba -c conda-forge
    conda activate ciconv
    mamba install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 scikit-image -c pytorch
  2. Navigate to experiments/4_visual_place_recognition/cnnimageretrieval-pytorch/.

  3. Run

    git submodule update --init # download a fork of cnnimageretrieval-pytorch
    sh cirtorch/utils/setup_tests.sh # download datasets and pre-trained models 
    python3 -m cirtorch.examples.test --network-path data/networks/retrieval-SfM-120k_w_resnet101_gem/model.path.tar --multiscale '[1, 1/2**(1/2), 1/2]' --datasets '247tokyo1k' --whitening 'retrieval-SfM-120k'
  4. Use --network-path retrievalSfM120k-resnet101-gem to compare against the vanilla method (a ResNet101 trained without the color invariant).

  5. Use --datasets 'gp_dl_nr' to test on the GardensPointWalking dataset.

Selected results from the paper:

Method                     Tokyo 24/7 (mAP)
ResNet101 GeM [baseline]   85.0
W-ResNet101 GeM [ours]     88.3

Citation

If you find this repository useful for your work, please cite as follows:

@article{lengyel2021zeroshot,
  title={Zero-Shot Domain Adaptation with a Physics Prior},
  author={Attila Lengyel and Sourav Garg and Michael Milford and Jan C. van Gemert},
  year={2021},
  eprint={2108.05137},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}