Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment (ICCV 2021)

Overview

This is a PyTorch project for the paper Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment by Ruixing Wang, Xiaogang Xu, Chi-Wing Fu, Jiangbo Lu, Bei Yu, and Jiaya Jia, presented at ICCV 2021.

Introduction

Enhancing low-light videos is an important task, yet previous work is mostly trained on paired static images or paired videos of static scenes. We instead propose a new dataset, built with new capture strategies, that contains high-quality spatially aligned video pairs of dynamic scenes under low- and normal-light conditions. We achieve this by building a mechatronic system to precisely control the dynamics during the video capture process, and we further align the video pairs, both spatially and temporally, by identifying the system's uniform-motion stage. Besides the dataset, we also propose an end-to-end framework, in which we design a self-supervised strategy to reduce noise while enhancing illumination based on the Retinex theory.
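
For intuition, the Retinex theory referenced above models an observed image I as the pixel-wise product of reflectance R and illumination L (I = R * L), so enhancement amounts to adjusting L while keeping R. The following is a minimal single-image sketch of this idea, not the paper's network; the max-over-channels illumination estimate is just one common simple choice:

import numpy as np

def retinex_enhance(img: np.ndarray, gamma: float = 0.4) -> np.ndarray:
    """img: float sRGB frame in [0, 1]; returns an illumination-adjusted frame."""
    # Estimate illumination as the per-pixel max over color channels.
    L = img.max(axis=2, keepdims=True).clip(1e-3, 1.0)
    R = img / L  # reflectance, so that img = R * L
    return (R * L ** gamma).clip(0.0, 1.0)  # gamma < 1 brightens dark regions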

paper link

SDSD dataset

The SDSD dataset is collected as dynamic video pairs containing low-light and normal-light videos. It consists of two parts, i.e., the indoor subset and the outdoor subset. There are 70 video pairs in the indoor subset and 80 video pairs in the outdoor subset.

All data is hosted on Baidu Pan (extraction code: zcrb):
indoor_np: the data of the indoor subset used for training; all video frames are saved as .npy files at a resolution of 512 x 960 for fast training.
outdoor_np: the data of the outdoor subset used for training; all video frames are saved as .npy files at a resolution of 512 x 960 for fast training.
indoor_png: the original video data of the indoor subset; all frames are saved as .png files at a resolution of 1080 x 1920.
outdoor_png: the original video data of the outdoor subset; all frames are saved as .png files at a resolution of 1080 x 1920.

The evaluation setting is as follows:

  1. Randomly select 12 scenes from the indoor subset and take the others as the training data. The performance on indoor scenes is computed on the first 30 frames of each of these 12 scenes, i.e., 360 frames.
  2. Randomly select 13 scenes from the outdoor subset and take the others as the training data. The performance on outdoor scenes is computed on the first 30 frames of each of these 13 scenes, i.e., 390 frames. (The split of training and testing is specified by "testing_dir" in the corresponding config file.)

The arrangement of the dataset is
--indoor/outdoor
----GT (the videos under normal light)
--------pair1
--------pair2
--------...
----LQ (the videos under low light)
--------pair1
--------pair2
--------...
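
As a quick sanity check after downloading, the aligned frames can be read directly with NumPy, as in the sketch below; the per-frame file name "0000.npy" inside each pair folder is an assumption:

import numpy as np

# Load one aligned low-/normal-light frame pair from the indoor subset.
low = np.load('./dataset/indoor/LQ/pair1/0000.npy')  # low-light frame
gt = np.load('./dataset/indoor/GT/pair1/0000.npy')  # normal-light ground truth
print(low.shape, gt.shape)  # expected: (512, 960, 3) for the *_np data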

After downloading the dataset, place it in './dataset' (you can also place the dataset elsewhere, as long as you modify "path_to_dataset" in the corresponding config file).

The SMID dataset for training

Different from the original setting of SMID, our work aims to enhance sRGB videos rather than RAW videos. Thus, we first convert the RAW data to sRGB data with rawpy. You can download the processed dataset for experiments via the following link: Baidu Pan (extraction code: btux).
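
For reference, the RAW-to-sRGB conversion with rawpy looks roughly like the sketch below; the postprocess() arguments and the file names are assumptions, not the exact pipeline used to produce the released data:

import numpy as np
import rawpy

def raw_to_srgb(raw_path: str) -> np.ndarray:
    """Demosaic one RAW frame and return an 8-bit sRGB array."""
    with rawpy.imread(raw_path) as raw:
        # use_camera_wb applies the white balance recorded by the camera
        return raw.postprocess(use_camera_wb=True, output_bps=8)

frame = raw_to_srgb('0001/frame_0000.ARW')  # hypothetical input path
np.save('0001/frame_0000.npy', frame)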

The arrangement of the dataset is
--smid
----SMID_Long_np (the frames under normal light)
--------0001
--------0002
--------...
----SMID_LQ_np (the frames under low light)
--------0001
--------0002
--------...

After downloading the dataset, place it in './dataset'. The arrangement of the dataset is the same as that of SDSD. You can also place the dataset elsewhere, as long as you modify "path_to_dataset" in the corresponding config file.

Project Setup

First, install Python 3. We advise installing Python 3 and PyTorch with Anaconda:

conda create --name py36 python=3.6
source activate py36

Clone the repo and install the remaining requirements:

cd $HOME
git clone --recursive git@github.com:dvlab-research/SDSD.git
cd SDSD
pip install -r requirements.txt

Then compile the DCN (deformable convolution) library:

python setup.py build
python setup.py develop
python setup.py install

Train

Training on the indoor subset of SDSD:

python -m torch.distributed.launch --nproc_per_node 1 --master_port 4320 train.py -opt options/train/train_in_sdsd.yml --launcher pytorch

Training on the outdoor subset of SDSD:

python -m torch.distributed.launch --nproc_per_node 1 --master_port 4320 train.py -opt options/train/train_out_sdsd.yml --launcher pytorch

Training on SMID:

python -m torch.distributed.launch --nproc_per_node 1 --master_port 4322 train.py -opt options/train/train_smid.yml --launcher pytorch

Quantitative Test

We use PSNR and SSIM as the metrics for evaluation.
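
For reference, both metrics can be computed with scikit-image's standard implementations, as in the minimal sketch below; the project's quantitative_test.py may differ in details such as border cropping or the exact SSIM settings:

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_frame(enhanced: np.ndarray, gt: np.ndarray):
    """Compute PSNR/SSIM for one uint8 H x W x 3 frame pair."""
    psnr = peak_signal_noise_ratio(gt, enhanced, data_range=255)
    # channel_axis requires scikit-image >= 0.19; older versions use multichannel=True
    ssim = structural_similarity(gt, enhanced, data_range=255, channel_axis=2)
    return psnr, ssim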

For evaluation on the indoor subset of SDSD, write the location of the checkpoint in "pretrain_model_G" of options/test/test_in_sdsd.yml, then use the following command line:

python quantitative_test.py -opt options/test/test_in_sdsd.yml
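
The relevant entry in the option file looks roughly like the excerpt below; the surrounding "path" key and the checkpoint location are assumptions based on EDVR-style config files:

# hypothetical excerpt of options/test/test_in_sdsd.yml
path:
  pretrain_model_G: ./pretrained/indoor_G.pth  # point this at your checkpoint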

For evaluation on the outdoor subset of SDSD, write the location of the checkpoint in "pretrain_model_G" of options/test/test_out_sdsd.yml, then use the following command line:

python quantitative_test.py -opt options/test/test_out_sdsd.yml

For evaluation on SMID, write the location of the checkpoint in "pretrain_model_G" of options/test/test_smid.yml, then use the following command line:

python quantitative_test.py -opt options/test/test_smid.yml

Pre-trained Model

You can download our trained models using the following link: https://drive.google.com/file/d/1_V0Dxtr4dZ5xZuOsU1gUIUYUDKJvj7BZ/view?usp=sharing

the model trained on the indoor subset of SDSD: indoor_G.pth
the model trained on the outdoor subset of SDSD: outdoor_G.pth
the model trained on SMID: smid_G.pth

Qualitative Test

We provide a script to visualize the enhanced frames. Please download the pretrained models or use your own trained models, and then use one of the following command lines:

python qualitative_test.py -opt options/test/test_in_sdsd.yml
python qualitative_test.py -opt options/test/test_out_sdsd.yml
python qualitative_test.py -opt options/test/test_smid.yml

Citation Information

If you find the project useful, please cite:

@inproceedings{wang2021sdsd,
  title={Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment},
  author={Wang, Ruixing and Xu, Xiaogang and Fu, Chi-Wing and Lu, Jiangbo and Yu, Bei and Jia, Jiaya},
  booktitle={ICCV},
  year={2021}
}

Acknowledgments

This source code is inspired by EDVR.

Contributions

If you have any questions/comments/bug reports, feel free to e-mail the author Xiaogang Xu ([email protected]).
