Deep-Burst-SR

Official implementation of Deep Burst Super-Resolution

Publication: Deep Burst Super-Resolution. Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. CVPR 2021 [arXiv]

Overview

While single-image super-resolution (SISR) has attracted substantial interest in recent years, the proposed approaches are limited to learning image priors in order to add high-frequency details. In contrast, multi-frame super-resolution (MFSR) offers the possibility of reconstructing rich details by combining signal information from multiple shifted images. This key advantage, along with the increasing popularity of burst photography, has made MFSR an important problem for real-world applications. We propose a novel architecture for the burst super-resolution task. Our network takes multiple noisy RAW images as input and generates a denoised, super-resolved RGB image as output. This is achieved by explicitly aligning deep embeddings of the input frames using pixel-wise optical flow. The information from all frames is then adaptively merged using an attention-based fusion module. In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset, consisting of smartphone bursts and high-resolution DSLR ground truth.
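
To make the pipeline concrete, the toy sketch below mirrors the three stages described above (per-frame encoding, flow-based alignment, attention-based fusion) in miniature. It is a conceptual illustration, not the official network: the layer sizes, the zero-flow placeholder (the real model estimates pixel-wise optical flow), and the pixel-shuffle decoder are all illustrative assumptions.

```python
# Toy stand-in for the encode -> align -> fuse -> decode pipeline; NOT the DBSR network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyBurstSR(nn.Module):
    def __init__(self, in_ch=4, feat=32, scale=4):
        super().__init__()
        self.encoder = nn.Conv2d(in_ch, feat, 3, padding=1)    # per-frame deep embedding
        self.attention = nn.Conv2d(feat, 1, 3, padding=1)      # per-pixel fusion logits
        self.decoder = nn.Conv2d(feat, 3 * scale ** 2, 3, padding=1)
        self.scale = scale

    def warp(self, feat, flow):
        # Bilinearly sample features along a per-pixel flow field.
        b, _, h, w = feat.shape
        ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                                torch.arange(w, device=feat.device), indexing='ij')
        base = torch.stack((xs, ys), dim=0).float()            # (2, H, W)
        grid = (base.unsqueeze(0) + flow).permute(0, 2, 3, 1)  # (B, H, W, 2)
        norm = torch.tensor([w - 1, h - 1], dtype=torch.float32, device=feat.device)
        return F.grid_sample(feat, 2 * grid / norm - 1, align_corners=True)

    def forward(self, burst):                       # burst: (B, N, 4, H, W) packed RAW
        b, n, c, h, w = burst.shape
        feats = self.encoder(burst.reshape(b * n, c, h, w))
        # Placeholder: the real model estimates pixel-wise optical flow per frame.
        flow = torch.zeros(b * n, 2, h, w, device=burst.device)
        aligned = self.warp(feats, flow).reshape(b, n, -1, h, w)
        logits = self.attention(aligned.reshape(b * n, -1, h, w)).reshape(b, n, 1, h, w)
        merged = (logits.softmax(dim=1) * aligned).sum(dim=1)  # attention-based fusion
        return F.pixel_shuffle(self.decoder(merged), self.scale)  # (B, 3, sH, sW)

sr = ToyBurstSR()(torch.randn(1, 14, 4, 32, 32))    # 14-frame burst -> (1, 3, 128, 128)
```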

[Figure: Comparison of our Deep Burst SR approach with a single-image baseline for 4x super-resolution of a RAW burst captured from a Samsung Galaxy S8]

Installation

Clone the Git repository.

git clone https://github.com/goutamgmb/deep-burst-sr.git

Install dependencies

Run the installation script to install all the dependencies. You need to provide the conda install path (e.g. ~/anaconda3) and the name for the created conda environment (here env-dbsr).

bash install.sh conda_install_path env-dbsr

This script will also download the default DBSR networks and create default environment settings.

Update environment settings

The environment settings file admin/local.py contains the paths to pre-trained networks, datasets, etc. Update the paths in local.py according to your local environment.
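
For reference, here is a minimal sketch of what admin/local.py might look like. The dataset and output variables are the ones this README refers to; the class name and the exact set of fields are assumptions, so keep the template that install.sh generates and just fill in the paths.

```python
# admin/local.py -- sketch only; the class name and extra fields are assumptions.
class EnvironmentSettings:
    def __init__(self):
        self.workspace_dir = '/path/to/workspace'              # assumption: logs/checkpoints
        self.zurichraw2rgb_dir = '/path/to/zurich-raw-to-rgb'  # Zurich RAW to RGB set
        self.synburstval_dir = '/path/to/SyntheticBurstVal'    # synthetic validation set
        self.burstsr_dir = '/path/to/burstsr_dataset'          # cropped BurstSR set
        self.save_data_path = '/path/to/saved_predictions'     # where results are written
```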

Toolkit Overview

The toolkit consists of the following sub-modules.

  • admin: Includes functions for loading networks, TensorBoard, etc., and also contains environment settings.
  • data: Contains functions for generating synthetic bursts, the camera pipeline, and data processing (e.g. loading images, data augmentation).
  • data_specs: Information about train/val splits of different datasets.
  • dataset: Contains integration of datasets such as BurstSR, SyntheticBurst, and ZurichRAW2RGB.
  • evaluation: Scripts to run and evaluate models on standard datasets.
  • external: External dependencies, e.g. PWC-Net.
  • models: Contains different layers and network definitions.
  • util_scripts: Utility scripts, e.g. to download datasets.
  • utils: General utility functions, e.g. for plotting, data type conversions, and loading networks.

Datasets

The toolkit provides integration for the following datasets, which can be used to train and evaluate the models.

Zurich RAW to RGB Canon set

The RGB images from the training split of the Zurich RAW to RGB mapping dataset can be used to generate synthetic bursts for training using the SyntheticBurstProcessing class in data/processing.py.

Preparation: Download the Zurich RAW to RGB canon set from here and unpack the zip folder. Set the zurichraw2rgb_dir variable in admin/local.py to point to the unpacked dataset directory.
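
The full camera pipeline lives in SyntheticBurstProcessing; as a rough mental model only (a deliberately simplified stand-in, not the actual implementation), generating a burst amounts to shifting, downsampling, mosaicking, and adding noise to an RGB image:

```python
# Simplified stand-in for synthetic burst generation, NOT the repo's pipeline:
# random shifts emulate hand-held motion, striding emulates downsampling, and a
# Gaussian term stands in for the shot/read noise model.
import numpy as np

def make_synthetic_burst(rgb, burst_size=14, scale=4, seed=0):
    """rgb: float32 HxWx3 in [0, 1]. Returns burst_size noisy low-res RAW mosaics."""
    rng = np.random.default_rng(seed)
    burst = []
    for i in range(burst_size):
        # Integer translations stand in for sub-pixel shifts (frame 0 is the base).
        dy, dx = (0, 0) if i == 0 else rng.integers(-8, 9, size=2)
        shifted = np.roll(rgb, (dy, dx), axis=(0, 1))
        lr = shifted[::scale, ::scale]                    # naive downsampling
        raw = np.empty(lr.shape[:2], dtype=np.float32)    # RGGB Bayer mosaic
        raw[0::2, 0::2] = lr[0::2, 0::2, 0]               # R
        raw[0::2, 1::2] = lr[0::2, 1::2, 1]               # G
        raw[1::2, 0::2] = lr[1::2, 0::2, 1]               # G
        raw[1::2, 1::2] = lr[1::2, 1::2, 2]               # B
        raw += rng.normal(0.0, 0.01, raw.shape).astype(np.float32)
        burst.append(np.clip(raw, 0.0, 1.0))
    return burst

burst = make_synthetic_burst(np.random.rand(256, 256, 3).astype(np.float32))
```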

SyntheticBurst validation set

A pre-generated synthetic validation set for evaluating the models. The dataset contains 300 synthetic bursts, each containing 14 RAW images. The synthetic bursts are generated from the RGB images in the test split of the Zurich RAW to RGB mapping dataset. The dataset can be loaded using the SyntheticBurstVal class in the dataset/synthetic_burst_val_set.py file.

Preparation: Download the dataset from here and unpack the zip file. Set the synburstval_dir variable in admin/local.py to point to the unpacked dataset directory.

BurstSR dataset (cropped)

The BurstSR dataset contains RAW bursts captured with a Samsung Galaxy S8 and the corresponding HR ground truths captured using a DSLR camera. This is the pre-processed version of the dataset, containing roughly aligned crops from the original images. The dataset can be loaded using the BurstSRDataset class in the dataset/burstsr_dataset.py file. Please check the DBSR paper for more details.

Preparation: The dataset has been split into 10 parts and can be downloaded and unpacked using the util_scripts/download_burstsr_dataset.py script. Set the burstsr_dir variable in admin/local.py to point to the unpacked BurstSR dataset directory.
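
A minimal loading sketch with the BurstSRDataset class mentioned above; the keyword argument and the structure of the returned sample are assumptions, so check dataset/burstsr_dataset.py for the actual signature:

```python
# Sketch only: the argument name and the returned sample structure are assumptions.
from dataset.burstsr_dataset import BurstSRDataset

dataset = BurstSRDataset(split='val')   # assumed keyword; root comes from admin/local.py
sample = dataset[0]                     # a RAW burst crop with its DSLR ground truth
print(len(dataset))
```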

BurstSR dataset (full)

The BurstSR dataset contains RAW bursts captured with a Samsung Galaxy S8 and the corresponding HR ground truths captured using a DSLR camera. This is the raw version of the dataset, containing the full burst images in DNG format.

Preparation: The dataset can be downloaded and unpacked using the util_scripts/download_raw_burstsr_data.py script.

Evaluation

You can run the trained models on RAW bursts to generate HR RGB images and compute the quality of the predictions using the evaluation module.

Note: Please prepare the necessary datasets as explained in the Datasets section before running the models.

Evaluate on SyntheticBurst validation set

You can evaluate the models on the SyntheticBurst validation set using the evaluation/synburst package. First, create an experiment setting in evaluation/synburst/experiments containing the list of models to evaluate. You can start with the provided setting dbsr_default.py as a reference. Please refer to network_param.py for examples of how to specify a model for evaluation.
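
As an illustration, an experiment setting is just a Python file that returns the list of networks to evaluate. The sketch below follows that pattern, but the NetworkParam import path and its arguments are assumptions; mirror the provided dbsr_default.py and network_param.py in practice.

```python
# evaluation/synburst/experiments/my_setting.py -- sketch only; import path and
# NetworkParam arguments are assumptions, copy the structure of dbsr_default.py.
from evaluation.common_utils.network_param import NetworkParam

def main():
    network_list = [
        NetworkParam(network_path='dbsr_synthetic_default.pth'),  # downloaded weights
    ]
    return network_list
```

You would then pass my_setting as EXPERIMENT_NAME to the evaluation scripts below.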

Save network predictions

You can save the predictions of a model on bursts from the SyntheticBurst dataset by running

python evaluation/synburst/save_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. dbsr_default). The script will save the predictions of the model in the directory pointed to by the save_data_path variable in admin/local.py.

Note: The network predictions are saved in the linear sensor color space (i.e. the color space of the input RAW burst) as 16-bit PNGs.
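
Since the outputs are 16-bit PNGs, reading them back for analysis needs an image loader that preserves bit depth. A minimal sketch using OpenCV (an assumption; the normalization below assumes the full 16-bit range, and the repo's own loader may scale differently):

```python
import cv2
import numpy as np

pred = cv2.imread('pred.png', cv2.IMREAD_UNCHANGED)   # preserves the 16-bit depth
assert pred.dtype == np.uint16
pred = pred.astype(np.float32) / (2 ** 16 - 1)        # back to linear values in [0, 1]
```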

Compute performance metrics

You can obtain the standard performance metrics (e.g. PSNR, MS-SSIM, LPIPS) using the compute_score.py script.

python evaluation/synburst/compute_score.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. dbsr_default). The script will run the models to generate the predictions and compute the scores. In case you want to compute performance metrics for results saved using save_results.py, you can run compute_score.py with the additional --load_saved argument.

python evaluation/synburst/compute_score.py EXPERIMENT_NAME --load_saved

In this case, the script will load pre-saved predictions whenever available. If saved predictions are not available, it will run the model to first generate the predictions and then compute the scores.
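
For reference, PSNR, the headline metric here, reduces to 10 * log10(max^2 / MSE) between prediction and ground truth; a minimal NumPy sketch (the repo's scorer additionally computes MS-SSIM and LPIPS, and details such as boundary cropping may differ):

```python
import numpy as np

def psnr(pred, gt, max_value=1.0):
    """PSNR in dB for arrays scaled to [0, max_value]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)
```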

Qualitative comparison

You can perform qualitative analysis of the models by visualizing the saved network predictions, along with the ground truth, in sRGB format using the visualize_results.py script.

python evaluation/synburst/visualize_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting containing the list of models you want to use (e.g. dbsr_default). The script will display the predictions of each model in sRGB format, along with the ground truth. You can toggle between images and zoom in on particular image regions using the UI. See visualize_results.py for details.

Note: You need to first save the network predictions using the save_results.py script before you can visualize them using visualize_results.py.

Evaluate on BurstSR validation set

You can evaluate the models on the BurstSR validation set using the evaluation/burstsr package. First, create an experiment setting in evaluation/burstsr/experiments containing the list of models to evaluate. You can start with the provided setting dbsr_default.py as a reference. Please refer to network_param.py for examples of how to specify a model for evaluation.

Save network predictions

You can save the predictions of a model on bursts from the BurstSR validation set by running

python evaluation/burstsr/save_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. dbsr_default). The script will save the predictions of the model in the directory pointed to by the save_data_path variable in admin/local.py.

Note: The network predictions are saved in the linear sensor color space (i.e. the color space of the input RAW burst) as 16-bit PNGs.

Compute performance metrics

You can obtain the standard performance metrics (e.g. PSNR, MS-SSIM, LPIPS) after spatial and color alignment (see the paper for details) using the compute_score.py script.

python evaluation/burstsr/compute_score.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. dbsr_default). The script will run the models to generate the predictions and compute the scores. In case you want to compute performance metrics for results saved using save_results.py, you can run compute_score.py with the additional --load_saved argument.

python evaluation/burstsr/compute_score.py EXPERIMENT_NAME --load_saved

In this case, the script will load pre-saved predictions whenever available. If saved predictions are not available, it will run the model to first generate the predictions and then compute the scores.

Qualitative comparison

You can perform qualitative analysis of the models by visualizing the saved network predictions, along with the ground truth, in sRGB format using the visualize_results.py script.

python evaluation/burstsr/visualize_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting containing the list of models you want to use (e.g. dbsr_default). The script will display the predictions of each model in sRGB format, along with the ground truth. You can toggle between images and zoom in on particular image regions using the UI. See visualize_results.py for details.

Note: You need to first save the network predictions using the save_results.py script before you can visualize them using visualize_results.py.

Model Zoo

Here, we provide pre-trained network weights and report their performance.

Note: The models have been retrained using the cleaned up code, and thus can have small performance differences compared to the models used for the paper.

SyntheticBurst models

The models are evaluated using all 14 burst images.

| Model | PSNR | MS-SSIM | LPIPS | Links | Notes |
|-------|------|---------|-------|-------|-------|
| CVPR2021 | 39.09 | 0.945 | 0.084 | - | CVPR2021 results |
| dbsr_synthetic_default | 39.17 | 0.946 | 0.081 | model | Official retrained model |

BurstSR models

The models are evaluated using all 14 burst images. The metrics are computed after spatial and color alignment, as described in the DBSR paper.

| Model | PSNR | MS-SSIM | LPIPS | Links | Notes |
|-------|------|---------|-------|-------|-------|
| CVPR2021 | 47.76 | 0.984 | 0.030 | - | CVPR2021 results |
| dbsr_burstsr_default | 47.70 | 0.984 | 0.029 | model | Official retrained model |

Training

We are still waiting for approval from our project sponsors to release the training code. We hope to release it soon. Meanwhile, please feel free to contact us with any questions regarding training.

Acknowledgement

The toolkit reuses code from external projects, e.g. PWC-Net (see the external module).
