Exploring Image Deblurring via Blur Kernel Space (CVPR'21)

Overview

Exploring Image Deblurring via Encoded Blur Kernel Space

About the project

We introduce a method to encode the blur operators of an arbitrary dataset of sharp-blur image pairs into a blur kernel space. Assuming the encoded kernel space is close enough to in-the-wild blur operators, we propose an alternating optimization algorithm for blind image deblurring: it approximates an unseen blur operator by a kernel in the encoded space and searches for the corresponding sharp image. By design, the encoded kernel space is fully differentiable and can therefore be easily adopted in deep neural network models.
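The following PyTorch sketch illustrates this alternating optimization at a high level. It is a minimal sketch, not the repository's actual API: F stands in for the trained blur-encoding network (sharp image + kernel code -> blurry image), and blind_deblur, kernel_dim, and the step counts are hypothetical.

import torch

def blind_deblur(F, y, steps=300, lr_x=0.01, lr_k=0.01, kernel_dim=512):
    # Alternately refine a sharp-image estimate x and a kernel code k
    # so that F(x, k) reproduces the observed blurry image y.
    x = y.clone().requires_grad_(True)               # initialize the sharp estimate with the blurry input
    k = torch.randn(kernel_dim, requires_grad=True)  # random code in the encoded kernel space
    opt_x = torch.optim.Adam([x], lr=lr_x)
    opt_k = torch.optim.Adam([k], lr=lr_k)
    for _ in range(steps):
        # Step 1: fix x, move k toward a kernel that explains the observed blur.
        loss_k = torch.nn.functional.mse_loss(F(x.detach(), k), y)
        opt_k.zero_grad(); loss_k.backward(); opt_k.step()
        # Step 2: fix k, sharpen x under the current kernel estimate.
        loss_x = torch.nn.functional.mse_loss(F(x, k.detach()), y)
        opt_x.zero_grad(); loss_x.backward(); opt_x.step()
    return x.detach(), k.detach()

Because the kernel space is differentiable, both subproblems reduce to plain gradient descent; the actual implementation may add priors or regularization that this sketch omits.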

Blur kernel space

Details of the method and experimental results can be found in the following paper:

@inproceedings{m_Tran-etal-CVPR21, 
  author = {Phong Tran and Anh Tran and Quynh Phung and Minh Hoai}, 
  title = {Explore Image Deblurring via Encoded Blur Kernel Space}, 
  year = {2021}, 
  booktitle = {Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition (CVPR)} 
}

Please CITE our paper whenever this repository is used to help produce published results or incorporated into other software.

Open In Colab

Table of Contents

  • About the project
  • Getting started
  • Training and evaluation
  • Model Zoo
  • Notes and references

Getting started

Prerequisites

  • Python >= 3.7
  • PyTorch >= 1.4.0
  • CUDA >= 10.0

Installation

git clone https://github.com/VinAIResearch/blur-kernel-space-exploring.git
cd blur-kernel-space-exploring


conda create -n BlurKernelSpace -y python=3.7
conda activate BlurKernelSpace
conda install --file requirements.txt
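Once the environment is set up, a quick sanity check (a minimal sketch, not part of the repository) can confirm the prerequisites above are met:

import sys
import torch

assert sys.version_info >= (3, 7), "Python >= 3.7 is required"
print("PyTorch:", torch.__version__)          # should be >= 1.4.0
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)    # should be >= 10.0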

Training and evaluation

Preparing datasets

You can download the datasets in the model zoo section.

To use your own dataset, it must be organized as follows:

root
├── blur_imgs
    ├── 000
    ├──── 00000000.png
    ├──── 00000001.png
    ├──── ...
    ├── 001
    ├──── 00000000.png
    ├──── 00000001.png
    ├──── ...
├── sharp_imgs
    ├── 000
    ├──── 00000000.png
    ├──── 00000001.png
    ├──── ...
    ├── 001
    ├──── 00000000.png
    ├──── 00000001.png
    ├──── ...

where the root, blur_imgs, and sharp_imgs folders can have arbitrary names. For example, if root, blur_imgs, and sharp_imgs are REDS, train_blur, and train_sharp respectively (that is, you are using the REDS training set), use the following scripts to create the lmdb datasets:

python create_lmdb.py --H 720 --W 1280 --C 3 --img_folder REDS/train_sharp --name train_sharp_wval --save_path ../datasets/REDS/train_sharp_wval.lmdb
python create_lmdb.py --H 720 --W 1280 --C 3 --img_folder REDS/train_blur --name train_blur_wval --save_path ../datasets/REDS/train_blur_wval.lmdb

where (H, W, C) is the shape of the images (note that all images in the dataset must have the same shape), img_folder is the folder containing the images, name is the name of the dataset, and save_path is the save destination (save_path must end with .lmdb).

When the script finishes, the two lmdb folders train_sharp_wval.lmdb and train_blur_wval.lmdb will be created at the given save_path (datasets/REDS in the example above).
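To spot-check a generated lmdb, a reader along the following lines can be used. This is a sketch that assumes the EDVR-style layout this repository borrows (raw uint8 image buffers keyed by folder and frame index); the key shown is hypothetical, so check the meta info written alongside the lmdb for the exact format.

import lmdb
import numpy as np

env = lmdb.open('datasets/REDS/train_sharp_wval.lmdb', readonly=True)
with env.begin() as txn:
    buf = txn.get('000_00000000'.encode('ascii'))   # hypothetical key: folder 000, frame 00000000
    img = np.frombuffer(buf, dtype=np.uint8).reshape(720, 1280, 3)  # (H, W, C) as passed to create_lmdb.py
print(img.shape)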

Training

To perform image deblurring, data augmentation, or blur generation, you first need to train the blur encoding network (the function F in the paper). This is the only network that you need to train. After creating the dataset, set dataroot_HQ and dataroot_LQ in options/kernel_encoding/REDS/woVAE.yml to the paths of the sharp and blur lmdb datasets created above, then use the following script to train the model:

python train.py -opt options/kernel_encoding/REDS/woVAE.yml

where -opt is the path to the YAML file containing the training configuration. You can find some default configurations in the options folder. Checkpoints, training states, and logs will be saved in experiments/modelName. You can change the configuration (learning rate, hyper-parameters, network structure, etc.) in the YAML file.
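If you prefer to set the dataset paths programmatically instead of editing the file by hand, a helper along these lines works; the nesting of dataroot_HQ and dataroot_LQ under datasets/train is an assumption here, so check your copy of the config first. Note that safe_dump rewrites the file without its comments.

import yaml

cfg_path = 'options/kernel_encoding/REDS/woVAE.yml'
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

# Assumed nesting; adjust to match the actual structure of the yml file.
cfg['datasets']['train']['dataroot_HQ'] = 'datasets/REDS/train_sharp_wval.lmdb'
cfg['datasets']['train']['dataroot_LQ'] = 'datasets/REDS/train_blur_wval.lmdb'

with open(cfg_path, 'w') as f:
    yaml.safe_dump(cfg, f)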

Testing

Data augmentation

To augment a given dataset, first, create an lmdb dataset using scripts/create_lmdb.py as before. Then use the following script:

python data_augmentation.py --target_H=720 --target_W=1280 \
                            --source_H=720 --source_W=1280 \
                            --augmented_H=256 --augmented_W=256 \
                            --source_LQ_root=datasets/REDS/train_blur_wval.lmdb \
                            --source_HQ_root=datasets/REDS/train_sharp_wval.lmdb \
                            --target_HQ_root=datasets/REDS/test_sharp_wval.lmdb \
                            --save_path=results/GOPRO_augmented \
                            --num_images=10 \
                            --yml_path=options/data_augmentation/default.yml

(target_H, target_W), (source_H, source_W), and (augmented_H, augmented_W) are the desired shapes of the target images, source images, and augmented images respectively. source_LQ_root, source_HQ_root, and target_HQ_root are the paths of the lmdb datasets created before for the reference blur-sharp pairs and the input sharp images. num_images is the size of the augmented dataset. yml_path is the path to the model configuration file, which also specifies the trained model to use. Results will be saved in save_path.

Data augmentation examples

Generate novel blur kernels

To generate a blur image given a sharp image, use the following command:

python generate_blur.py --yml_path=options/generate_blur/default.yml \
                        --image_path=imgs/sharp_imgs/mushishi.png \
                        --num_samples=10 \
                        --save_path=./res.png

where yml_path is the path of the configuration file (which specifies the pre-trained model to use) and image_path is the path of the sharp image. After running the script, a blur image corresponding to the sharp image will be saved to save_path. Some expected outputs are shown in the kernel generating examples. Note: this only works with models that were trained with the --VAE flag, and the size of the input images must be divisible by 128; see the padding sketch below.
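Since the input size must be divisible by 128, one option is to pad the image before running the script. This is a minimal sketch, not part of the repository; the choice of reflection padding is arbitrary.

import torch
import torch.nn.functional as nnf

def pad_to_multiple(img, multiple=128):
    # img: (1, C, H, W) tensor; pads H and W up to the next multiple of `multiple`.
    _, _, h, w = img.shape
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    return nnf.pad(img, (0, pad_w, 0, pad_h), mode='reflect')

x = torch.rand(1, 3, 720, 1280)
print(pad_to_multiple(x).shape)  # torch.Size([1, 3, 768, 1280])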

Generic Deblurring

To deblur a blurry image, use the following command:

python generic_deblur.py --image_path imgs/blur_imgs/blur1.png --yml_path options/generic_deblur/default.yml --save_path ./res.png

where image_path is the path of the blurry image. yml_path is the path of the configuration file. The deblurred image will be saved to save_path.

Image deblurring examples

Deblurring using sharp image prior

First, you need to download the pre-trained styleGAN or styleGAN2 networks. If you want to use styleGAN, download the mapping and synthesis networks, then rename and copy them to experiments/pretrained/stylegan_mapping.pt and experiments/pretrained/stylegan_synthesis.pt respectively. If you want to use styleGAN2 instead, download the pretrained model, then rename and copy it to experiments/pretrained/stylegan2.pt.
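A quick existence check for the renamed checkpoints (a small convenience sketch, not part of the repository):

import os

for p in ['experiments/pretrained/stylegan_mapping.pt',
          'experiments/pretrained/stylegan_synthesis.pt',
          'experiments/pretrained/stylegan2.pt']:
    print(p, 'found' if os.path.exists(p) else 'missing')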

To deblur a blurry image using styleGAN latent space as the sharp image prior, you can use one of the following commands:

python domain_specific_deblur.py --input_dir imgs/blur_faces \
                                 --output_dir experiments/domain_specific_deblur/results \
                                 --yml_path options/domain_specific_deblur/stylegan.yml   # use the latent space of styleGAN

python domain_specific_deblur.py --input_dir imgs/blur_faces \
                                 --output_dir experiments/domain_specific_deblur/results \
                                 --yml_path options/domain_specific_deblur/stylegan2.yml  # use the latent space of styleGAN2

Results will be saved in experiments/domain_specific_deblur/results. Note: in general, the code works with images whose size is divisible by 128. However, since our blur kernels are not uniform, the kernel size grows with the image size.

PULSE-like Deblurring examples

Model Zoo

Pretrained models and the corresponding datasets are provided in the table below. After downloading the datasets and models, follow the instructions in the testing section to perform data augmentation, blur generation, or image deblurring.

Model name          Dataset(s)   Status
REDS woVAE          REDS         ✔️
GOPRO woVAE         GOPRO        ✔️
GOPRO wVAE          GOPRO        ✔️
GOPRO + REDS woVAE  GOPRO, REDS  ✔️

Notes and references

The training code is borrowed from the EDVR project: https://github.com/xinntao/EDVR

The backbone code is borrowed from the DeblurGAN project: https://github.com/KupynOrest/DeblurGAN

The styleGAN code is borrowed from the PULSE project: https://github.com/adamian98/pulse

The stylegan2 code is borrowed from https://github.com/rosinality/stylegan2-pytorch
