
Deep-Rep-MFIR

Official implementation of Deep Reparametrization of Multi-Frame Super-Resolution and Denoising

Publication: Deep Reparametrization of Multi-Frame Super-Resolution and Denoising. Goutam Bhat, Martin Danelljan, Fisher Yu, Luc Van Gool, and Radu Timofte. ICCV 2021 oral [arXiv]

Note: The code for our CVPR2021 paper "Deep Burst Super-Resolution" is available at goutamgmb/deep-burst-sr

Overview

We propose a deep reparametrization of the maximum a posteriori formulation commonly employed in multi-frame image restoration tasks. Our approach is derived by introducing a learned error metric and a latent representation of the target image, which transforms the MAP objective to a deep feature space. The deep reparametrization allows us to directly model the image formation process in the latent space, and to integrate learned image priors into the prediction. Our approach thereby leverages the advantages of deep learning, while also benefiting from the principled multi-frame fusion provided by the classical MAP formulation. We validate our approach through comprehensive experiments on burst denoising and burst super-resolution datasets. Our approach sets a new state-of-the-art for both tasks, demonstrating the generality and effectiveness of the proposed formulation.

Figure: Classical multi-frame image restoration approaches minimize a reconstruction error between the observed images and the simulated images to obtain the output image y. In contrast, we employ an encoder E to compute the reconstruction error in a learned feature space. The reconstruction error is minimized w.r.t. a latent representation z, which is then passed through the decoder D to obtain the prediction y.
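In schematic notation (ours, summarizing the description above rather than quoting the paper): classical MAP-based restoration estimates the output image directly,

$$\hat{y} = \arg\min_y \sum_i \rho\big(x_i - A_i\, y\big) + R(y),$$

where $x_i$ are the observed frames, $A_i$ models the image formation for frame $i$, $\rho$ is an error metric, and $R$ is an image prior. The deep reparametrization instead minimizes a learned error $\rho_\theta$ in the feature space of the encoder $E$, w.r.t. a latent representation $z$, which the decoder $D$ maps to the final prediction,

$$\hat{z} = \arg\min_z \sum_i \rho_\theta\big(E(x_i) - A_i\, z\big), \qquad \hat{y} = D(\hat{z}).$$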

Table of Contents

  • Installation
  • Toolkit Overview
  • Datasets
  • Evaluation
  • Model Zoo
  • Training
  • Acknowledgement

Installation

Clone the Git repository.

git clone https://github.com/goutamgmb/deep-rep.git

Install dependencies

Run the installation script to install all the dependencies. You need to provide the conda install path (e.g. ~/anaconda3) and the name for the created conda environment (here env-deeprep).

bash install.sh conda_install_path env-deeprep

This script will also download the default DeepRep networks and create default environment settings.

Update environment settings

The environment setting file admin/local.py contains the paths for pre-trained networks, datasets etc. Update the paths in local.py according to your local environment.
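For reference, here is a minimal sketch of what the relevant part of admin/local.py can look like. The variable names below are the ones referenced in this README; the class layout and the path values are illustrative assumptions, since install.sh generates the actual file:

```python
# Sketch of admin/local.py -- illustrative paths only.
class EnvironmentSettings:
    def __init__(self):
        self.save_data_path = '/data/deeprep_predictions'     # where evaluation results are saved
        self.pretrained_nets_dir = '/data/pretrained_nets'    # downloaded models (assumed name)
        self.zurichraw2rgb_dir = '/data/zurich-raw-to-rgb'    # Zurich RAW to RGB Canon set
        self.synburstval_dir = '/data/SyntheticBurstVal'      # SyntheticBurst validation set
        self.burstsr_dir = '/data/burstsr_dataset'            # cropped BurstSR dataset
        self.openimages_dir = '/data/openimages'              # OpenImages dataset
        self.kpn_testset_path = '/data/kpn_test_set'          # grayscale denoising test file
        self.bpn_color_testset_dir = '/data/color_testset'    # color denoising test set
```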

Toolkit Overview

The toolkit consists of the following sub-modules.

  • actors: Contains the actor classes for different trainings. The actor class is responsible for passing the input data through the network and calculating the losses (see the sketch after this list).
  • admin: Includes functions for loading networks, TensorBoard logging, etc., and also contains the environment settings.
  • data: Contains functions for generating synthetic bursts, camera pipeline, processing data (e.g. loading images, data augmentations).
  • data_specs: Information about train/val splits of different datasets.
  • dataset: Contains integration of datasets such as BurstSR, SyntheticBurst, ZurichRAW2RGB, OpenImages, Grayscale denoising and Color denoising.
  • evaluation: Scripts to run and evaluate models on standard datasets.
  • external: External dependencies, e.g. PWCNet.
  • models: Contains different layers and network definitions.
  • train_settings: Default training settings for different models.
  • trainers: The main class which runs the training.
  • util_scripts: Util scripts to e.g. download datasets.
  • utils: General utility functions for e.g. plotting, data type conversions, loading networks.
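
To make the actor/trainer split concrete, here is a minimal sketch of the actor pattern described above (hypothetical names; the real classes in actors/ differ in detail):

```python
import torch.nn as nn

class SimpleActor:
    """Minimal sketch of an actor: forwards a batch and computes the loss."""

    def __init__(self, net: nn.Module, objective: nn.Module):
        self.net = net
        self.objective = objective

    def __call__(self, data: dict):
        # Pass the input burst through the network ...
        pred = self.net(data['burst'])
        # ... and compute the training loss against the ground-truth frame.
        loss = self.objective(pred, data['frame_gt'])
        return loss, {'Loss/total': loss.item()}
```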

Datasets

The toolkit provides integration for the following datasets, which can be used to train/evaluate the models.

Zurich RAW to RGB Canon set

The RGB images from the training split of the Zurich RAW to RGB mapping dataset can be used to generate synthetic bursts for training using the SyntheticBurstProcessing class in data/processing.py.

Preparation: Download the Zurich RAW to RGB canon set from here and unpack the zip file. Set the zurichraw2rgb_dir variable in admin/local.py to point to the unpacked dataset directory.

SyntheticBurst validation set

The pre-generated synthetic validation set introduced in DBSR for the RAW burst super-resolution task. The dataset contains 300 synthetic bursts, each containing 14 RAW images. The synthetic bursts are generated from the RGB images from the test split of the Zurich RAW to RGB mapping dataset. The dataset can be loaded using the SyntheticBurstVal class in dataset/synthetic_burst_val_set.py.

Preparation: Download the dataset from here and unpack the zip file. Set the synburstval_dir variable in admin/local.py to point to the unpacked dataset directory.
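
A minimal loading sketch (the constructor and the return signature of __getitem__ are assumptions; verify against dataset/synthetic_burst_val_set.py):

```python
from dataset.synthetic_burst_val_set import SyntheticBurstVal

# The dataset root is taken from the synburstval_dir variable in admin/local.py.
dataset = SyntheticBurstVal()
print(len(dataset))  # 300 bursts, each with 14 RAW frames

burst, gt, meta_info = dataset[0]  # assumed signature: burst, ground truth, metadata
```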

BurstSR dataset (cropped)

The real-world BurstSR dataset introduced in DBSR for the RAW burst super-resolution task. The dataset contains RAW bursts captured from a Samsung Galaxy S8 and corresponding HR ground truths captured using a DSLR camera. This is the pre-processed version of the dataset that contains roughly aligned crops from the original images. The dataset can be loaded using the BurstSRDataset class in dataset/burstsr_dataset.py. Please check the DBSR paper for more details.

Preparation: The dataset has been split into 10 parts and can be downloaded and unpacked using the util_scripts/download_burstsr_dataset.py script. Set the burstsr_dir variable in admin/local.py to point to the unpacked BurstSR dataset directory.
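
A minimal loading sketch (argument names and return signature are assumptions; verify against dataset/burstsr_dataset.py):

```python
from dataset.burstsr_dataset import BurstSRDataset

# Load the validation split of the cropped BurstSR dataset; the root is
# taken from the burstsr_dir variable in admin/local.py.
dataset = BurstSRDataset(split='val')
burst, frame_gt, meta_info_burst, meta_info_gt = dataset[0]  # assumed signature
```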

BurstSR dataset (full)

The real-world BurstSR dataset introduced in DBSR for the RAW burst super-resolution task. The dataset contains RAW bursts captured from a Samsung Galaxy S8 and corresponding HR ground truths captured using a DSLR camera. This is the raw version of the dataset containing the full burst images in DNG format.

Preparation: The dataset can be downloaded and unpacked using the util_scripts/download_raw_burstsr_data.py script.

OpenImages dataset

We use the RGB images from the OpenImages dataset to generate synthetic bursts when training the burst denoising models. The dataset can be loaded using the OpenImagesDataset class in dataset/openimages_dataset.py.

Preparation: Download the dataset from here. Set the openimages_dir variable in admin/local.py to point to the downloaded dataset directory.

Grayscale Burst Denoising test set

The pre-generated synthetic grayscale burst denoising test set introduced in the KPN paper. The dataset can be loaded using the GrayscaleDenoiseTestSet class in dataset/grayscale_denoise_test_set.py.

Preparation: Download the dataset from here. Set the kpn_testset_path variable in admin/local.py to point to the downloaded file.

Color Burst Denoising test set

The pre-generated synthetic color burst denoising test set introduced in the BPN paper. The dataset can be loaded using the ColorDenoiseTestSet class in dataset/color_denoise_test_set.py.

Preparation: Download the dataset from here and unpack the zip file. Set the bpn_color_testset_dir variable in admin/local.py to point to the unpacked dataset directory.

Evaluation

You can run the trained models on the included datasets and compute the quality of predictions using the evaluation module.

Note: Please prepare the necessary datasets as explained in the Datasets section before running the models.

Evaluate on SyntheticBurst validation set

You can evaluate the models on the SyntheticBurst validation set using the evaluation/synburst package. First create an experiment setting in evaluation/synburst/experiments containing the list of models to evaluate. You can start with the provided setting deeprep_default.py as a reference. Please refer to network_param.py for examples on how to specify a model for evaluation.
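
As an illustration, a minimal experiment setting could look roughly like the following (the import path and constructor arguments are assumptions; mirror the provided deeprep_default.py and network_param.py instead of copying this verbatim):

```python
from evaluation.common_utils.network_param import NetworkParam  # assumed path

def main():
    network_list = []
    # Evaluate the retrained synthetic-SR model from the model zoo.
    network_list.append(NetworkParam(network_path='deeprep_sr_synthetic_default.pth',
                                     unique_name='deeprep_default'))
    return network_list
```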

Save network predictions

You can save the predictions of a model on bursts from the SyntheticBurst dataset by running

python evaluation/synburst/save_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_default). The script will save the predictions of the model in the directory pointed to by the save_data_path variable in admin/local.py.

Note: The network predictions are saved in the linear sensor color space (i.e. the color space of the input RAW burst) as 16-bit PNGs.
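
If you want to post-process the saved predictions yourself, they can be read back along these lines (the file name is a placeholder and the normalization constant is an assumption; check save_results.py for the scale actually used when writing the PNGs):

```python
import cv2
import numpy as np

# Read a saved prediction without any dtype conversion (stays uint16).
pred = cv2.imread('pred_0000.png', cv2.IMREAD_UNCHANGED)

# Map back to a float image in linear sensor space. The 2**14 scale is an
# assumption borrowed from related burst-SR code -- verify before relying on it.
pred = pred.astype(np.float32) / 2**14
```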

Compute performance metrics

You can obtain the standard performance metrics (e.g. PSNR, MS-SSIM, LPIPS) using the compute_score.py script:

python evaluation/synburst/compute_score.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_default). The script will run the models to generate the predictions and compute the scores. In case you want to compute performance metrics for results saved using save_results.py, you can run compute_score.py with the additional --load_saved argument.

python evaluation/synburst/compute_score.py EXPERIMENT_NAME --load_saved

In this case, the script will load pre-saved predictions whenever available. If saved predictions are not available, it will run the model to first generate the predictions and then compute the scores.

Qualitative comparison

You can perform qualitative analysis of the model by visualizing the saved network predictions, along with the ground truth, in sRGB format using the visualize_results.py script.

python evaluation/synburst/visualize_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting containing the list of models you want to use (e.g. deeprep_default). The script will display the predictions of each model in sRGB format, along with the ground truth. You can toggle between images and zoom in on particular image regions using the UI. See visualize_results.py for details.

Note: You need to first save the network predictions using the save_results.py script before you can visualize them using visualize_results.py.

Evaluate on BurstSR validation set

You can evaluate the models on the BurstSR validation set using the evaluation/burstsr package. First create an experiment setting in evaluation/burstsr/experiments containing the list of models to evaluate. You can start with the provided setting deeprep_default.py as a reference. Please refer to network_param.py for examples on how to specify a model for evaluation.

Save network predictions

You can save the predictions of a model on bursts from the BurstSR validation set by running

python evaluation/burstsr/save_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_default). The script will save the predictions of the model in the directory pointed to by the save_data_path variable in admin/local.py.

Note: The network predictions are saved in the linear sensor color space (i.e. the color space of the input RAW burst) as 16-bit PNGs.

Compute performance metrics

You can obtain the standard performance metrics (e.g. PSNR, MS-SSIM, LPIPS) after spatial and color alignment (see the paper for details) using the compute_score.py script:

python evaluation/burstsr/compute_score.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_default). The script will run the models to generate the predictions and compute the scores. In case you want to compute performance metrics for results saved using save_results.py, you can run compute_score.py with the additional --load_saved argument.

python evaluation/burstsr/compute_score.py EXPERIMENT_NAME --load_saved

In this case, the script will load pre-saved predictions whenever available. If saved predictions are not available, it will run the model to first generate the predictions and then compute the scores.

Qualitative comparison

You can perform qualitative analysis of the model by visualizing the saved network predictions, along with the ground truth, in sRGB format using the visualize_results.py script.

python evaluation/burstsr/visualize_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting containing the list of models you want to use (e.g. deeprep_default). The script will display the predictions of each model in sRGB format, along with the ground truth. You can toggle between images and zoom in on particular image regions using the UI. See visualize_results.py for details.

Note: You need to first save the network predictions using the save_results.py script before you can visualize them using visualize_results.py.

Evaluate on Grayscale and Color denoising test sets

You can evaluate the models on the Grayscale and Color denoising test sets using the evaluation/burst_denoise package. First create an experiment setting in evaluation/burst_denoise/experiments containing the list of models to evaluate. You can start with the provided setting deeprep_color.py as a reference. Please refer to network_param.py for examples on how to specify a model for evaluation.

Save network predictions

You can save the predictions of a model on bursts from the Grayscale/Color denoising test sets by running

python evaluation/burst_denoise/save_results.py EXPERIMENT_NAME MODE NOISE_LEVEL

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_color). MODE denotes which dataset to use (can be color or grayscale). NOISE_LEVEL denotes the noise level to use (can be 1, 2, 4, 8, or all). The script will save the predictions of the model in the directory pointed to by the save_data_path variable in admin/local.py.
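
For example, to save the predictions of the models listed in deeprep_color on the color test set at all noise levels:

python evaluation/burst_denoise/save_results.py deeprep_color color all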

Note: The network predictions are saved in the linear color space (i.e. the color space of the input burst) as 16-bit PNGs.

Compute performance metrics

You can obtain the standard performance metrics (e.g. PSNR, MS-SSIM, LPIPS) using the compute_score.py script:

python evaluation/burst_denoise/compute_score.py EXPERIMENT_NAME MODE NOISE_LEVEL

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_color). MODE denotes which dataset to use (can be color or grayscale). NOISE_LEVEL denotes the noise level to use (can be 1, 2, 4, 8, or all). The script will run the models to generate the predictions and compute the scores. In case you want to compute performance metrics for results saved using save_results.py, you can run compute_score.py with the additional --load_saved argument.

python evaluation/burst_denoise/compute_score.py EXPERIMENT_NAME MODE NOISE_LEVEL --load_saved

In this case, the script will load pre-saved predictions whenever available. If saved predictions are not available, it will run the model to first generate the predictions and then compute the scores.

Qualitative comparison

You can perform qualitative analysis of the model by visualizing the saved network predictions, along with the ground truth, using the visualize_results.py script.

python evaluation/burst_denoise/visualize_results.py EXPERIMENT_NAME MODE NOISE_LEVEL

Here, EXPERIMENT_NAME is the name of the experiment setting containing the list of models you want to use (e.g. deeprep_color). MODE denotes which dataset to use (can be color or grayscale). NOISE_LEVEL denotes the noise level to use (can be 1, 2, 4, 8, or all). The script will display the predictions of each model, along with the ground truth. You can toggle between images and zoom in on particular image regions using the UI. See visualize_results.py for details.

Note: You need to first save the network predictions using the save_results.py script before you can visualize them using visualize_results.py.

Model Zoo

Here, we provide pre-trained network weights and report their performance.

Note: The models have been retrained using the cleaned up code, and thus can have small performance differences compared to the models used for the paper.

SyntheticBurst models

The models are evaluated using all 14 burst images.

| Model | PSNR | MS-SSIM | LPIPS | Links | Notes |
|---|---|---|---|---|---|
| ICCV2021 | 41.56 | 0.964 | 0.045 | - | ICCV2021 results |
| deeprep_sr_synthetic_default | 41.55 | - | - | model | Official retrained model |

BurstSR models

The models are evaluated using all 14 burst images. The metrics are computed after spatial and color alignment, as described in the DBSR paper.

| Model | PSNR | MS-SSIM | LPIPS | Links | Notes |
|---|---|---|---|---|---|
| ICCV2021 | 48.33 | 0.985 | 0.023 | - | ICCV2021 results |
| deeprep_sr_burstsr_default | - | - | - | model | Official retrained model |

Grayscale denoising models

The models are evaluated using all 8 burst images. The reported values are PSNR (dB) at each noise gain.

| Model | Gain 1 | Gain 2 | Gain 4 | Gain 8 | Links | Notes |
|---|---|---|---|---|---|---|
| deeprep_denoise_grayscale_pwcnet | 39.37 | 36.51 | 33.38 | 29.69 | model | Official retrained model |
| deeprep_denoise_grayscale_customflow | 39.10 | 36.14 | 32.89 | 28.98 | model | Official retrained model |

Color denoising models

The models are evaluated using all 8 burst images. The reported values are PSNR (dB) at each noise gain.

| Model | Gain 1 | Gain 2 | Gain 4 | Gain 8 | Links | Notes |
|---|---|---|---|---|---|---|
| deeprep_denoise_color_pwcnet | 42.21 | 39.13 | 35.75 | 32.52 | model | Official retrained model |
| deeprep_denoise_color_customflow | 41.90 | 38.85 | 35.48 | 32.29 | model | Official retrained model |

Training

You can train the models using the run_training.py script. Please download and set up the necessary datasets as described in the Datasets section before starting training. You will also need a pre-trained PWC-Net model, which is automatically downloaded by the install.sh script. You can also download it manually using

gdown https://drive.google.com/uc\?id\=1s11Ud1UMipk2AbZZAypLPRpnXOS9Y1KO -O pretrained_networks/pwcnet-network-default.pth

You can train a model using the following command

python run_training.py MODULE_NAME PARAM_NAME

Here, MODULE_NAME is the name of the training module (e.g. deeprep), while PARAM_NAME is the name of the parameter setting file (e.g. sr_synthetic_default). We provide the default training settings used to obtain the results in the ICCV paper.
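
For orientation, a parameter setting file is a Python module under train_settings/ that exposes the training configuration. A hypothetical outline (the run(settings) entry point is an assumption based on similar toolkits; see the provided files for the real structure):

```python
# Hypothetical outline of train_settings/deeprep/sr_synthetic_default.py.
def run(settings):
    settings.description = 'Default settings for synthetic burst SR'
    settings.batch_size = 4   # illustrative value only
    settings.crop_sz = 384    # illustrative value only

    # The real settings files construct the datasets, data loaders, the
    # DeepRep network, the actor and the optimizer here, and then start
    # training via a trainer from trainers/.
```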

Acknowledgement

The toolkit uses code from external projects, such as the PWC-Net implementation included in the external module.
