PyTorch implementation of DFN: Distributed Feedback Network for Single-Image Deraining.

Overview

DFN: Distributed Feedback Network for Single-Image Deraining

Abstract

Recently, deep convolutional neural networks have achieved great success for single-image deraining. However, affected by the intrinsic overlap between rain streaks and background texture patterns, a majority of these methods tend to remove almost all texture details in rain-free regions, leading to over-smoothing effects in the recovered background. To generate reasonable rain streak layers and improve the reconstruction quality of the background, we propose a distributed feedback network (DFN) with a recurrent structure. A novel feedback block is designed to implement the feedback mechanism: in each feedback block, the hidden state carrying high-level information (output) flows into the next iteration to correct the low-level representations (input). By stacking multiple feedback blocks, the proposed network, in which the hidden states are distributed, can extract powerful high-level representations for rain streak layers. Curriculum learning is employed to connect the loss of each iteration and to ensure that the hidden states contain the notion of output. In addition, a self-ensemble strategy for the rain removal task, which retains the approximately vertical character of rain streaks, is explored to maximize the potential performance of the deraining model. Extensive experimental results demonstrate the superiority of the proposed method in comparison with other deraining methods.
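For intuition, below is a minimal, simplified PyTorch sketch of the feedback idea described above (illustrative only, not the actual DFN code): a single feedback block is shown for brevity, its hidden state is fed back across iterations to correct the low-level input features, and one derained estimate is produced per iteration. All module names, channel counts, and the iteration count are assumptions.

```python
# Simplified sketch of the feedback mechanism (illustrative only; not the
# actual DFN implementation). At each iteration the block's high-level hidden
# state is fed back and fused with the low-level input features.
import torch
import torch.nn as nn

class FeedbackBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # fuse low-level input features with the fed-back hidden state
        self.fuse = nn.Conv2d(channels * 2, channels, 3, padding=1)
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, low_feat, hidden):
        fused = torch.relu(self.fuse(torch.cat([low_feat, hidden], dim=1)))
        return self.refine(fused)                  # new (high-level) hidden state

class RecurrentDerainSketch(nn.Module):
    def __init__(self, channels=32, iterations=4):
        super().__init__()
        self.iterations = iterations
        self.head = nn.Conv2d(3, channels, 3, padding=1)   # low-level features
        self.block = FeedbackBlock(channels)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)   # rain-streak layer

    def forward(self, rainy):
        low = self.head(rainy)
        hidden = torch.zeros_like(low)             # initial hidden state
        outputs = []
        for _ in range(self.iterations):
            hidden = self.block(low, hidden)               # feedback: reuse hidden
            outputs.append(rainy - self.tail(hidden))      # derained estimate
        return outputs                             # one estimate per iteration

if __name__ == "__main__":
    preds = RecurrentDerainSketch()(torch.randn(1, 3, 64, 64))
    print(len(preds), preds[-1].shape)             # quick shape check
```

The list of per-iteration outputs is what an iteration-wise (curriculum-style) loss would supervise; see the sketch after the Train section.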


Requirements

* Python 3.7, PyTorch >= 0.4.0
* opencv-python
* Platforms: Ubuntu 18.04, CUDA 10.2
* MATLAB (for calculating PSNR and SSIM)

Datasets

DFN is trained and tested on five benchmark datasets: Rain100L [1], Rain100H [1], RainLight [2], RainHeavy [2], and Rain12 [3]. Note that for Rain100H, DFN is trained on the strict set of 1,254 training images.

Note:

(i) The authors of [1] later updated Rain100L and Rain100H; we refer to the updated datasets as RainLight and RainHeavy here.

(ii) Rain12 contains only 12 pairs of testing images, so we use the model trained on Rain100L to test on Rain12.

Getting Started

Test

All pre-trained models are placed in ./logs/.

Run test_DFN.py to obtain the derained images. Then calculate the evaluation metrics by running the MATLAB scripts in ./statistics/. For example, to compute the average PSNR and SSIM on Rain100L, run Rain100L.m.
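The reported numbers are computed with the MATLAB scripts. If you only need a quick sanity check in Python, one hypothetical way is sketched below using OpenCV and scikit-image; the MATLAB evaluation may differ (e.g. if it measures on the luminance channel), so treat these values as approximate. The file paths are placeholders.

```python
# Hypothetical quick PSNR/SSIM check in Python (the official numbers come from
# the MATLAB scripts in ./statistics/ and may differ slightly).
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(derained_path, gt_path):
    pred = cv2.imread(derained_path)   # BGR, uint8
    gt = cv2.imread(gt_path)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    # for scikit-image < 0.19, replace channel_axis=2 with multichannel=True
    ssim = structural_similarity(gt, pred, channel_axis=2, data_range=255)
    return psnr, ssim

if __name__ == "__main__":
    # placeholder paths for one derained/ground-truth image pair
    print(evaluate_pair("results/Rain100L/rain-001.png",
                        "datasets/Rain100L/norain-001.png"))
```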

Train

To train the models, run train_DFN.py (remember to adjust the arguments in that file), or launch training from the terminal with the following command:

python train_DFN.py --save_path path_to_save_trained_models --data_path path_of_the_training_dataset
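The abstract notes that curriculum learning connects the loss of each iteration. Purely as an illustration (the actual loss and weighting used by train_DFN.py may differ), iteration-wise supervision over the per-iteration outputs could look like this:

```python
# Illustrative iteration-wise (curriculum-style) loss: every intermediate
# output is supervised, with later iterations weighted more heavily.
# The weights and the use of L1 loss are assumptions for illustration.
import torch.nn.functional as F

def curriculum_loss(outputs, target, weights=None):
    """outputs: list of per-iteration derained estimates; target: clean image."""
    if weights is None:
        weights = [0.5 ** (len(outputs) - 1 - i) for i in range(len(outputs))]
    return sum(w * F.l1_loss(o, target) for w, o in zip(weights, outputs))
```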

Results

Average PSNR/SSIM values of DFN and compared methods on the five datasets are shown below ('-' indicates results not reported):

| Dataset   | GMM         | DDN         | ResGuideNet | JORDER-E    | SSIR        | PReNet      | BRN         | MSPFN          | DFN         | DFN+        |
|-----------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|----------------|-------------|-------------|
| Rain100L  | 28.66/0.865 | 32.16/0.936 | 33.16/0.963 | -           | 32.37/0.926 | 37.48/0.979 | 38.16/0.982 | 37.5839/0.9784 | 39.22/0.985 | 39.85/0.987 |
| Rain100H  | 15.05/0.425 | 21.92/0.764 | 25.25/0.841 | -           | 22.47/0.716 | 29.62/0.901 | 30.73/0.916 | 30.8239/0.9055 | 31.40/0.926 | 31.81/0.930 |
| RainLight | -           | 31.66/0.922 | -           | 39.13/0.985 | 32.20/0.929 | 37.93/0.983 | 38.86/0.985 | 39.7540/0.9862 | 39.53/0.987 | 40.12/0.988 |
| RainHeavy | -           | 22.03/0.713 | -           | 29.21/0.891 | 22.17/0.719 | 29.36/0.903 | 30.27/0.917 | 30.7112/0.9129 | 31.07/0.927 | 31.47/0.931 |
| Rain12    | 32.02/0.855 | 31.78/0.900 | 29.45/0.938 | -           | 34.02/0.935 | 36.66/0.961 | 36.74/0.959 | 35.7780/0.9514 | 37.19/0.961 | 37.55/0.963 |
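In the table, DFN+ presumably denotes DFN with the self-ensemble strategy mentioned in the abstract. A simplified sketch of such a rain-aware self-ensemble is given below: only transforms that preserve the approximately vertical orientation of rain streaks (here, a horizontal flip) are averaged. The exact transform set used for DFN+ is an assumption.

```python
# Simplified self-ensemble sketch: average predictions over transforms that
# keep rain streaks roughly vertical (here, horizontal flip only). This is an
# illustration, not the exact strategy used for DFN+.
import torch

@torch.no_grad()
def self_ensemble(model, rainy):
    """model maps a rainy batch [N,3,H,W] to a derained batch of the same shape
    (e.g. the final-iteration output of the deraining network)."""
    out = model(rainy)
    flipped = model(torch.flip(rainy, dims=[-1]))     # horizontally flipped input
    out_flipped = torch.flip(flipped, dims=[-1])      # flip the prediction back
    return (out + out_flipped) / 2
```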


References

[1] Yang W, Tan R, Feng J, Liu J, Guo Z, and Yan S. Deep joint rain detection and removal from a single image. In IEEE CVPR, 2017.

[2] Yang W, Tan R, Feng J, Liu J, Yan S, and Guo Z. Joint rain detection and removal from a single image with contextualized deep networks. IEEE T-PAMI, 2019.

[3] Li Y, Tan RT, Guo X, Lu J, and Brown M. Rain streak removal using layer priors. In IEEE CVPR, 2016.

Citation

If you find our research or code useful, please cite our paper:

@article{DING2021,
  title = {Distributed Feedback Network for Single-Image Deraining},
  journal = {Information Sciences},
  year = {2021},
  issn = {0020-0255},
  doi = {https://doi.org/10.1016/j.ins.2021.02.080},
  url = {https://www.sciencedirect.com/science/article/pii/S0020025521002371},
  author = {Jiajun Ding and Huanlei Guo and Hang Zhou and Jun Yu and Xiongxiong He and Bo Jiang}
}