Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark (ICCV 2021)

Kun Wang, Zhenyu Zhang, Zhiqiang Yan, Xiang Li, Baobei Xu, Jun Li and Jian Yang

PCA Lab, Nanjing University of Science and Technology; Tencent YouTu Lab; Hikvision Research Institute

Introduction

This is the official repository for Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark. You can find our paper on arXiv. In this repository, we release the training and testing code, as well as the data split files for RobotCar-Night and nuScenes-Night.

Dependency

  • python>=3.6
  • torch>=1.7.1
  • torchvision>=0.8.2
  • mmcv>=1.3
  • pytorch-lightning>=1.4.5
  • opencv-python>=3.4
  • tqdm>=4.53
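
If you are starting from a clean environment, the dependencies can be installed with pip roughly as below (this simply mirrors the version bounds above; pick the torch/torchvision build that matches your CUDA version):

pip install "torch>=1.7.1" "torchvision>=0.8.2" "mmcv>=1.3" "pytorch-lightning>=1.4.5" "opencv-python>=3.4" "tqdm>=4.53"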

Dataset

The dataset used in our work is based on RobotCar and nuScenes. Please visit their official websites to download the data (we only use part of these datasets; if you just want to run the code, the sequences 2014-12-16-18-44-24 and 2014-12-09-13-21-02 of RobotCar and Packages 01, 02, 05, 09 and 10 of nuScenes are enough). To produce the ground truth depth, you can use the official toolboxes of the two datasets. After preparing the datasets, we strongly recommend that you organize the directory structure as follows. The split files are provided in split_files/.

RobotCar-Night root directory
|__Package name (e.g. 2014-12-16-18-44-24)
   |__depth (to store the .npy ground truth depth maps)
      |__ground truth depth files
   |__rgb (to store the .png color images)
      |__color image files
   |__intrinsic.npy (to store the camera intrinsics)
   |__test_split.txt (to store the test samples)
   |__train_split.txt (to store the train samples)
nuScenes-Night root directory
|__sequences (to store sequence data)
   |__video clip number (e.g. 00590cbfa24a430a8c274b51e1c71231)
      |__file_list.txt (to store the image file names in this video clip)
      |__intrinsic.npy (to store the camera intrinsic of this video clip)
      |__image files described in file_list.txt
|__splits (to store split files)
   |__split files with name (day/night)_(train/test)_split.txt
|__test
   |__color (to store color images for testing)
   |__gt (to store ground truth depth maps w.r.t color)

Note: You also need to configure the dataset path in datasets/common.py. The original resolution of nuScenes is too high, so we halve it during training.
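
The exact variable names in datasets/common.py depend on the repository version, so treat the snippet below as a rough sketch: the point is simply to make the dataset roots match the directories described above.

# datasets/common.py -- illustrative sketch; adjust the names to whatever
# the file actually defines in your copy of the repository.
ROBOTCAR_ROOT = '/path/to/RobotCar-Night'   # RobotCar-Night root directory
NUSCENES_ROOT = '/path/to/nuScenes-Night'   # nuScenes-Night root directory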

Training

Our model is trained using Distributed Data Parallel, as supported by PyTorch Lightning. You can train an RNW model on one dataset through the following two steps:

  1. Train a self-supervised model on daytime data:

    python train.py mono2_(rc/ns)_day number_of_your_gpus
  2. Train RNW:

    python train.py rnw_(rc/ns) number_of_your_gpus

Since there is no evaluation split, checkpoints are saved every two epochs.
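
For example, a complete run on RobotCar-Night with 4 GPUs would look like the following (the GPU count is only illustrative; rc/ns selects RobotCar or nuScenes):

python train.py mono2_rc_day 4
python train.py rnw_rc 4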

Testing

You can run the following commands to test on RobotCar-Night:

python test_robotcar_disp.py day/night config_name checkpoint_path
cd evaluation
python eval_robotcar.py day/night

To test on nuScenes-Night, you can run:

python test_nuscenes_disp.py day/night config_name checkpoint_path
cd evaluation
python eval_nuscenes.py day/night

Alternatively, you can use the scripts batch_eval_robotcar.py and batch_eval_nuscenes.py to run the above commands automatically.
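
For example, evaluating an RNW model on the RobotCar-Night night split could look like this (the checkpoint path is a placeholder for your own training output):

python test_robotcar_disp.py night rnw_rc checkpoints/rnw_rc.ckpt
cd evaluation
python eval_robotcar.py night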

Citation

If you find our work useful, please consider citing our paper:

@InProceedings{Wang_2021_ICCV,
    author    = {Wang, Kun and Zhang, Zhenyu and Yan, Zhiqiang and Li, Xiang and Xu, Baobei and Li, Jun and Yang, Jian},
    title     = {Regularizing Nighttime Weirdness: Efficient Self-Supervised Monocular Depth Estimation in the Dark},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {16055-16064}
}