Official repository for "Restormer: Efficient Transformer for High-Resolution Image Restoration". SOTA for motion deblurring, image deraining, denoising (Gaussian/real data), and defocus deblurring.

Overview


Restormer: Efficient Transformer for High-Resolution Image Restoration

Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang

Paper: https://arxiv.org/abs/2111.09881

News

  • Testing code and pre-trained models are released!

Abstract: Since convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data, these models have been extensively applied to image restoration and related tasks. Recently, another class of neural architectures, Transformers, have shown significant performance gains on natural language and high-level vision tasks. While the Transformer model mitigates the shortcomings of CNNs (i.e., limited receptive field and inadaptability to input content), its computational complexity grows quadratically with the spatial resolution, therefore making it infeasible to apply to most image restoration tasks involving high-resolution images. In this work, we propose an efficient Transformer model by making several key designs in the building blocks (multi-head attention and feed-forward network) such that it can capture long-range pixel interactions, while still remaining applicable to large images. Our model, named Restoration Transformer (Restormer), achieves state-of-the-art results on several image restoration tasks, including image deraining, single-image motion deblurring, defocus deblurring (single-image and dual-pixel data), and image denoising (Gaussian grayscale/color denoising, and real image denoising).
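
To make the key idea concrete, here is a minimal, illustrative sketch of channel-wise ("transposed") self-attention, in which the attention map is computed across feature channels (a C x C matrix) rather than across pixels, so its cost grows linearly with spatial resolution. This is an independent re-implementation for illustration only, not the code shipped in this repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Attention across channels instead of spatial positions (illustrative sketch)."""
    def __init__(self, dim, num_heads):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        # 1x1 conv mixes channels; 3x3 depth-wise conv adds local spatial context
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.qkv_dw = nn.Conv2d(dim * 3, dim * 3, kernel_size=3, padding=1, groups=dim * 3)
        self.project_out = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv_dw(self.qkv(x)).chunk(3, dim=1)
        # fold spatial dims: (batch, heads, channels_per_head, pixels)
        q = q.reshape(b, self.num_heads, c // self.num_heads, h * w)
        k = k.reshape(b, self.num_heads, c // self.num_heads, h * w)
        v = v.reshape(b, self.num_heads, c // self.num_heads, h * w)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (b, heads, c/heads, c/heads)
        out = attn.softmax(dim=-1) @ v                       # back to (b, heads, c/heads, pixels)
        return self.project_out(out.reshape(b, c, h, w))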


Network Architecture

Installation

The model is built in PyTorch 1.8.1 and tested in an Ubuntu 16.04 environment (Python 3.7, CUDA 10.2, cuDNN 7.6).

To install, follow these instructions:

conda create -n pytorch181 python=3.7
conda activate pytorch181
conda install pytorch=1.8 torchvision cudatoolkit=10.2 -c pytorch
pip install matplotlib scikit-learn scikit-image opencv-python yacs joblib natsort h5py tqdm
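
Optionally, a quick check such as the following (not part of the official instructions) confirms that the installed build matches the versions above and can see the GPU:

import torch
import torchvision

print("torch:", torch.__version__)            # expected: 1.8.x
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)      # expected: 10.2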

Results

Image Deraining comparisons on the Test100, Rain100H, Rain100L, Test1200, and Test2800 test sets. You can download Restormer's predictions from this Google Drive link


Single-Image Motion Deblurring results. Our Restormer is trained only on the GoPro dataset and directly applied to the HIDE and RealBlur benchmark datasets. You can download Restormer's predictions from this Google Drive link


Defocus Deblurring comparisons on the DPDD test set (containing 37 indoor and 39 outdoor scenes). S: single-image defocus deblurring. D: dual-pixel defocus deblurring. You can download Restormer's predictions from this Google Drive link


Gaussian Image Denoising comparisons for two categories of methods. Top super row: learning a single model to handle various noise levels. Bottom super row: training a separate model for each noise level. You can download Restormer's predictions from this Google Drive link

Grayscale

Color
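
For context, Gaussian denoising benchmarks synthesize their noisy inputs by adding zero-mean Gaussian noise with a fixed sigma to the clean images. The snippet below is an illustrative example of that setup, not the repository's evaluation code; exact conventions (clipping, seeding, value range) may differ from those behind the reported numbers.

import numpy as np

def add_gaussian_noise(clean, sigma, seed=0):
    # clean: H x W (grayscale) or H x W x 3 (color) image with values in [0, 255]
    rng = np.random.default_rng(seed)
    noisy = clean.astype(np.float32) + rng.normal(0.0, sigma, clean.shape).astype(np.float32)
    return noisy  # typically kept as float; some protocols also clip to [0, 255]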

Real Image Denoising on SIDD and DND datasets. ∗ denotes methods using additional training data. Our Restormer is trained only on the SIDD images and directly tested on DND. You can download Restormer's predictions from this Google Drive link

Citation

If you use Restormer, please consider citing:

@article{Zamir2021Restormer,
    title={Restormer: Efficient Transformer for High-Resolution Image Restoration}, 
    author={Syed Waqas Zamir and Aditya Arora and Salman Khan and Munawar Hayat 
            and Fahad Shahbaz Khan and Ming-Hsuan Yang},
    journal={ArXiv 2111.09881},
    year={2021}
}

Contact

Should you have any questions, please contact [email protected]

Comments
  • Problems about training Deraining

    Hi, congratulations on the great work! I changed the number of GPUs in train.sh and Deraining_Restormer.yml to 4 since I only have 4 GPUs, but I still can't train the Deraining code because of GPU memory limits. The program runs if I reduce batch_size_per_gpu, but then the batch size no longer matches the experimental settings. What can I do to reproduce the settings in your experiments (i.e., for progressive learning, training starts with patch size 128×128 and batch size 64, and the patch-size/batch-size pairs are updated to [(160^2, 40), (192^2, 32), (256^2, 16), (320^2, 8), (384^2, 8)] at iterations [92K, 156K, 204K, 240K, 276K])? See the schedule sketch after this issue.

    opened by Lucky0775 5
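
    For reference, the progressive-learning schedule quoted above can be written as a lookup from iteration to a (patch size, total batch size) pair. The helper below is a hypothetical sketch, not code from this repository; with fewer GPUs, accumulating gradients over several smaller sub-batches is one common way to approximate the total batch size, though results are not guaranteed to match the paper.

    # Hypothetical helper, not part of the official training code: returns the
    # (patch_size, total_batch_size) pair active at a given iteration, following
    # the progressive-learning schedule quoted in the question above.
    SCHEDULE = [(0, 128, 64), (92_000, 160, 40), (156_000, 192, 32),
                (204_000, 256, 16), (240_000, 320, 8), (276_000, 384, 8)]

    def patch_and_batch(iteration):
        patch, batch = SCHEDULE[0][1], SCHEDULE[0][2]
        for start_iter, p, b in SCHEDULE:
            if iteration >= start_iter:
                patch, batch = p, b
        return patch, batch

    print(patch_and_batch(100_000))  # -> (160, 40)
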
  • colab?

    I am pleased with your work; the level of completeness is really professional! Do you have any plans to release a Google Colab version of the code? Unfortunately, I can't run the code on my local machine due to hardware limitations.

    opened by osushilover 5
  • Questions about the quantitative results of other methods?

    Hi, how are the quantitative results for the other methods in Table 1 of Restormer calculated? Did you quote their reported results directly, or did you retrain the methods?

    Looking forward to your reply. Thank you!

    opened by C-water 3
  • Typical GPU memory requirements for training?

    I was trying to train Restormer and succeeded with 128x128 patches.

    However, my GPU memory runs out when training with 256x256 patches and a batch size larger than 2. My GPU is an RTX 3080 with 10 GB of memory.

    Do you know how much memory is needed to train on 256x256 patches with a batch size >= 8?

    opened by wonwoolee 3
  • Motion Deblurring Train

    Hi, thank you so much for your open-source work. When I trained the motion-deblurring model, I could not reach the results reported in the paper.

    1. I followed the dependency instructions in the repository, downloaded the GoPro dataset, and used the provided crop script to prepare the training and validation sets.
    2. I used the Deblurring_Restormer.yml configuration file for training, modified to use a single GPU.
    3. In another experiment, I fixed the crop size to 128. Both experiments gave results below 31 dB, much lower than the paper. Am I missing some details, and why are the results so different?
    opened by niehen6174 3
  • About the training

    How do I solve the error about create_dataloader and create_dataset (imported from init.py) in train.py? Also, what is the difference between training via the basicsr scripts and training for a specific task (e.g., Deraining)?

    opened by SunYJLU 3
  • problem on the step "Install gdrive using"

    Dear author, I ran into a problem when running "go get github.com/prasmussen/gdrive":

    package golang.org/x/oauth2/google: unrecognized import path "golang.org/x/oauth2/google" (https fetch: Get https://golang.org/x/oauth2/google?go-get=1: dial tcp 172.217.163.49:443: i/o timeout)

    How can I solve this? Thanks!

    opened by ZYQii 3
  • add model to Huggingface

    Hi, would you be interested in adding Restormer to the Hugging Face Hub? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. We can set up an organization or a user account under which Restormer can be added, similar to GitHub.

    Example from other organizations: Keras: https://huggingface.co/keras-io Microsoft: https://huggingface.co/microsoft Facebook: https://huggingface.co/facebook

    Example spaces with repos: github: https://github.com/salesforce/BLIP Spaces: https://huggingface.co/spaces/akhaliq/BLIP

    github: https://github.com/facebookresearch/omnivore Spaces: https://huggingface.co/spaces/akhaliq/omnivore

    and here are guides for adding spaces/models/datasets to your org

    How to add a Space: https://huggingface.co/blog/gradio-spaces how to add models: https://huggingface.co/docs/hub/adding-a-model uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

    Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

    opened by AK391 3
  •  denoising training dataset

    Well done! Could you tell me which datasets you used for denoising, both the real-image training data and the Gaussian denoising training data? Thank you very much!

    opened by 17346604401 3
  • some problems

    Since no training code is given, I wrote my own training program for Restormer. At the beginning of training, I could only set the batch size to 48 due to GPU memory limits. However, the loss hardly decreased over the first 10,000 to 20,000 iterations, and the PSNR stayed at around 26.2. Is training just relatively slow at this stage, or is something wrong? If possible, I would also like to know how the validation PSNR rose and how the loss fell during your training.

    opened by jiaaihhy 3
  • Would you inform about the wide-shallow network?

    Hello,

    In the ablation study, you compared a deeper vs. a wider Restormer. Could you share the details of the wider variant you mentioned?

    opened by amoeba04 2
  • About lr_scheduler.py

    Hi! In lr_scheduler.py, the line from torch.optim.lr_scheduler import _LRScheduler gives the message "Cannot find reference '_LRScheduler' in 'lr_scheduler.pyi'". How can I solve this problem? (See the note after this issue.)

    opened by Spacei567 3
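
    For what it's worth, this message usually comes from the IDE's type stubs rather than from Python itself: _LRScheduler is importable from torch.optim.lr_scheduler at runtime in PyTorch 1.8, even though it is a private-by-convention class. A minimal, illustrative sketch:

    # The import works at runtime despite the IDE warning about the .pyi stub.
    from torch.optim.lr_scheduler import _LRScheduler

    class ConstantScheduler(_LRScheduler):
        # Illustrative subclass that simply keeps the optimizer's base learning rates.
        def get_lr(self):
            return [base_lr for base_lr in self.base_lrs]
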
  • Question about training denoising model

    I followed the instructions and ran two Gaussian color image denoising experiments with sigma = 15 and sigma = 50, but I can't reproduce the PSNR values reported in the paper (see also the evaluation note after this issue). Here are my results:

    sigma = 15: CBSD68 34.398237 dB, Kodak 35.439437 dB, McMaster 35.556497 dB, Urban100 35.058984 dB
    sigma = 50: CBSD68 28.586302 dB, Kodak 29.967525 dB, McMaster 30.237451 dB, Urban100 29.891585 dB

    Did I miss some important details?

    opened by Andrew0613 0
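
    Regarding the reproduction question above: small PSNR offsets of around 0.1 dB can come from evaluation details such as computing PSNR on 8-bit quantized outputs rather than floating-point outputs, border cropping, or RGB versus Y-channel evaluation. For reference only, a generic PSNR computation (this is not the repository's evaluation script) looks like:

    import numpy as np

    def psnr(reference, estimate, data_range=255.0):
        # reference/estimate: arrays in the same value range (e.g. [0, 255])
        mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
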
  • About PSNR of IFAN in defocus deblurring tasks (DPDD datasets).

    Hi, did you retrain IFAN on DPDD? IFAN only provides results computed from 8-bit images, which is inconsistent with the results in this paper. I guess you retrained IFAN; if convenient, could you please provide the test images?

    Thank you very much!

    opened by C-water 0
  • About training, NCCL

    RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1616554786529/work/torch/lib/c10d/ProcessGroupNCCL.cpp:33, unhandled cuda error, NCCL version 2.7.8 ncclUnhandledCudaError: Call to CUDA function failed.

    How can I fix it? Please help!

    opened by jjjjzyyyyyy 1
  • About training

    Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.

    opened by jjjjzyyyyyy 0
  • About the Gaussian color image denoising results.

    Hi, I have a question about the Gaussian color image denoising results on the Kodak24 dataset. I downloaded the provided pre-trained models and used them for testing with the provided code base and environment. However, I cannot reproduce the Kodak24 results reported in Table 5 of the main paper; I get lower PSNR values (e.g., -0.12 dB for sigma 15, -0.11 dB for sigma 25, -0.14 dB for sigma 50). Can you give some explanation or suggestions? Thanks very much.

    opened by gladzhang 0
Owner
Syed Waqas Zamir (Research Scientist)