Revisiting Global Statistics Aggregation for Improving Image Restoration


Xiaojie Chu, Liangyu Chen, Chengpeng Chen, Xin Lu

Paper: https://arxiv.org/pdf/2112.04491.pdf

Introduction

This repository is the official implementation of TLSC. We propose the Test-time Local Statistics Converter (TLSC), which changes the region over which statistics are aggregated from the entire spatial dimension to a local window, mitigating the inconsistency between training and testing. Our approach requires no retraining or fine-tuning and incurs only a marginal extra cost.

Figure: Illustration of the training and testing schemes of image restoration. From left to right: image from the dataset; input to the restorer (patches or the entire image, depending on the scheme); statistics aggregated from the feature map. In (a), (b), and (c), statistics are aggregated over the entire spatial dimension. In (d), ours, statistics are aggregated over a local region for each pixel.
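
To make the test-time conversion concrete, the following is a minimal sketch of local statistics aggregation. It illustrates the concept only and is not the repository's exact implementation; the default window size of 384 is an assumption standing in for the training patch size.

    import torch
    import torch.nn.functional as F

    def local_mean(x: torch.Tensor, window: int = 384) -> torch.Tensor:
        # Per-pixel mean over a local window instead of the entire spatial extent.
        # x: (N, C, H, W) feature map; window: local region size, assumed to be
        # roughly the size of the training patches.
        _, _, h, w = x.shape
        kh, kw = min(window, h), min(window, w)
        # count_include_pad=False: border pixels average only over valid positions.
        y = F.avg_pool2d(x, kernel_size=(kh, kw), stride=1,
                         padding=(kh // 2, kw // 2), count_include_pad=False)
        return y[..., :h, :w]  # drop the extra row/column produced when kh/kw are even

    # At test time, a global statistic such as the mean used in an SE block,
    #   s = x.mean(dim=(2, 3), keepdim=True),
    # is replaced by the spatially varying local statistic
    #   s = local_mean(x)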

Abstract

Global spatial statistics, which are aggregated over the entire spatial dimensions, are widely used in top-performing image restorers: for example, the mean and variance in Instance Normalization (IN), adopted by HINet, and global average pooling (i.e., the mean) in Squeeze-and-Excitation (SE), applied in MPRNet. This paper first shows that statistics aggregated on patch-based features during training and on entire-image features during testing may be distributed very differently, leading to performance degradation in image restorers; this issue has been widely overlooked by previous works. To solve it, we propose a simple approach, Test-time Local Statistics Converter (TLSC), which replaces the region of the statistics aggregation operation from global to local, only at test time. Without retraining or fine-tuning, our approach significantly improves the restorer's performance. In particular, by extending SE with TLSC in state-of-the-art models, MPRNet is boosted by 0.65 dB in PSNR on the GoPro dataset, reaching 33.31 dB and exceeding the previous best result by 0.6 dB. In addition, we simply apply TLSC to a high-level vision task, i.e., semantic segmentation, and achieve competitive results. Extensive quantitative and qualitative experiments demonstrate that TLSC resolves the issue at marginal cost while yielding significant gains.

Usage

Installation

This implementation is based on BasicSR, an open-source toolbox for image/video restoration tasks.

git clone https://github.com/megvii-research/tlsc.git
cd tlsc
pip install -r requirements.txt
python setup.py develop

Quick Start (Single Image Inference)
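
The exact single-image inference entry point is not listed here. As a rough sketch only (the module path basicsr.models.archs.hinet_arch, the default constructor arguments, and the 'params' checkpoint key are assumptions based on the usual BasicSR/HINet layout), inference with the pretrained HINet checkpoint might look like:

    import cv2
    import numpy as np
    import torch

    from basicsr.models.archs.hinet_arch import HINet  # assumed module path

    model = HINet()  # constructor arguments assumed to match the released checkpoint
    ckpt = torch.load('./experiments/pretrained_models/HINet-GoPro.pth', map_location='cpu')
    model.load_state_dict(ckpt.get('params', ckpt))
    model.eval()

    img = cv2.imread('blurry.png').astype(np.float32) / 255.0      # HWC, BGR, in [0, 1]
    x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)        # to NCHW
    with torch.no_grad():
        y = model(x)
        y = y[-1] if isinstance(y, (list, tuple)) else y           # HINet outputs one image per stage
    out = (y.squeeze(0).permute(1, 2, 0).clamp(0, 1).numpy() * 255.0).round().astype(np.uint8)
    cv2.imwrite('deblurred.png', out)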

Main Results

Method                 GoPro            HIDE             REDS
                       PSNR    SSIM     PSNR    SSIM     PSNR    SSIM
HINet                  32.71   0.959    30.33   0.932    28.83   0.863
HINet-local (ours)     33.08   0.962    30.66   0.936    28.96   0.865
MPRNet                 32.66   0.959    30.96   0.939    -       -
MPRNet-local (ours)    33.31   0.964    31.19   0.942    -       -

Evaluation

Image Deblur - GoPro dataset
  • prepare data

    • mkdir ./datasets/GoPro

    • download the test set into ./datasets/GoPro/test (refer to MPRNet)

    • the directory structure should look like:

      ./datasets/
      ./datasets/GoPro/test/
      ./datasets/GoPro/test/input/
      ./datasets/GoPro/test/target/
  • eval

    • download pretrained HINet to ./experiments/pretrained_models/HINet-GoPro.pth

    • python basicsr/test.py -opt options/test/GoPro/HINetLocal-GoPro.yml

    • download pretrained MPRNet to ./experiments/pretrained_models/MPRNet-GoPro.pth

    • python basicsr/test.py -opt options/test/GoPro/MPRNetLocal-GoPro.yml

Image Deblur - HIDE dataset
  • prepare data

    • mkdir ./datasets/HIDE

    • download the test set into ./datasets/HIDE/test (refer to MPRNet)

    • the directory structure should look like:

      ./datasets/
      ./datasets/HIDE/test/
      ./datasets/HIDE/test/input/
      ./datasets/HIDE/test/target/
  • eval

    • download pretrained HINet to ./experiments/pretrained_models/HINet-GoPro.pth

    • python basicsr/test.py -opt options/test/HIDE/HINetLocal-HIDE.yml

    • download pretrained MPRNet to ./experiments/pretrained_models/MPRNet-GoPro.pth

    • python basicsr/test.py -opt options/test/HIDE/MPRNetLocal-HIDE.yml

Image Deblur - REDS dataset
  • prepare data

    • mkdir ./datasets/REDS

    • download the val set from val_blur, val_sharp to ./datasets/REDS/ and unzip them.

    • the directory structure should look like:

      ./datasets/
      ./datasets/REDS/
      ./datasets/REDS/val/
      ./datasets/REDS/val/val_blur_jpeg/
      ./datasets/REDS/val/val_sharp/
      
    • python scripts/data_preparation/reds.py

      • this flattens the folders and extracts the 300 validation images.
  • eval

    • download pretrained HINet to ./experiments/pretrained_models/HINet-REDS.pth
    • python basicsr/test.py -opt options/test/REDS/HINetLocal-REDS.yml

Trick: changing 'fast_imp: false' (naive implementation) to 'fast_imp: true' (faster implementation) in the MPRNetLocal configs gives faster inference.
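
The difference between the two implementations is not described in this README. As a generic illustration of why local-window statistics can be computed cheaply (a sketch of the underlying idea only, not the repository's code), the per-pixel window sums can be obtained from an integral image (cumulative sums) rather than an explicit stride-1 pooling:

    import torch
    import torch.nn.functional as F

    def box_sum_valid(x: torch.Tensor, k: int) -> torch.Tensor:
        # Sliding k x k window sums for every fully contained window, computed
        # from a summed-area table. x: (N, C, H, W) -> (N, C, H - k + 1, W - k + 1).
        s = F.pad(x, (1, 0, 1, 0)).cumsum(dim=2).cumsum(dim=3)  # integral image with a zero border
        return (s[..., k:, k:] - s[..., :-k, k:]
                - s[..., k:, :-k] + s[..., :-k, :-k])

    # Dividing by k * k gives the local mean. Border pixels, whose windows would
    # extend outside the image, need separate handling (e.g. clipping the window).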

License

This project is released under the MIT license. It is based on BasicSR, which is under the Apache 2.0 license.

Citations

If TLSC helps your research or work, please consider citing TLSC.

@article{chu2021tlsc,
  title={Revisiting Global Statistics Aggregation for Improving Image Restoration},
  author={Chu, Xiaojie and Chen, Liangyu and Chen, Chengpeng and Lu, Xin},
  journal={arXiv preprint arXiv:2112.04491},
  year={2021}
}

Contact

If you have any questions, please contact [email protected] or [email protected].
