Official PyTorch code for Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021)

Overview


This repository is the official PyTorch implementation of Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (arxiv, supp).



Normalizing flows have recently demonstrated promising results for low-level vision tasks. For image super-resolution (SR), they learn to predict diverse photo-realistic high-resolution (HR) images from the low-resolution (LR) image rather than learning a deterministic mapping. For image rescaling, they achieve high accuracy by jointly modelling the downscaling and upscaling processes. While existing approaches employ specialized techniques for these two tasks, we set out to unify them in a single formulation. In this paper, we propose the hierarchical conditional flow (HCFlow) as a unified framework for image SR and image rescaling. More specifically, HCFlow learns a bijective mapping between HR and LR image pairs by modelling the distribution of the LR image and the remaining high-frequency component simultaneously. In particular, the high-frequency component is conditional on the LR image in a hierarchical manner. To further enhance the performance, other losses such as perceptual loss and GAN loss are combined with the commonly used negative log-likelihood loss in training. Extensive experiments on general image SR, face image SR and image rescaling have demonstrated that the proposed HCFlow achieves state-of-the-art performance in terms of both quantitative metrics and visual quality.
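In symbols, the formulation described above can be sketched roughly as follows (notation introduced here for illustration, not copied from the paper): an invertible network f_θ maps an HR image x_HR to its LR counterpart x_LR plus a latent high-frequency component z, and training minimises a negative log-likelihood obtained from the change-of-variables formula (combined, where used, with the perceptual and GAN losses mentioned above).

f_\theta(x_{HR}) = (x_{LR}, z), \qquad
p_\theta(x_{HR}) = p(x_{LR}) \, p(z \mid x_{LR}) \, \left| \det \frac{\partial f_\theta(x_{HR})}{\partial x_{HR}} \right|, \qquad
\mathcal{L}_{NLL} = -\log p_\theta(x_{HR})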

         

Requirements

  • Python 3.7, PyTorch == 1.7.1
  • Requirements: opencv-python, lpips, natsort, etc.
  • Platforms: Ubuntu 16.04, cuda-11.0
cd HCFlow-master
pip install -r requirements.txt 

Quick Run (takes 1 minute)

To run the code without preparing any data, use one of the following commands:

cd codes
# face image SR
python test_HCFlow.py --opt options/test/test_SR_CelebA_8X_HCFlow.yml

# general image SR
python test_HCFlow.py --opt options/test/test_SR_DF2K_4X_HCFlow.yml

# image rescaling
python test_HCFlow.py --opt options/test/test_Rescaling_DF2K_4X_HCFlow.yml

Data Preparation

The framework of this project is based on MMSR and SRFlow. To prepare data, put the training and testing sets in ./datasets, e.g. ./datasets/DIV2K/HR/0801.png. Commonly used SR datasets can be downloaded here. There are two ways to accelerate data loading: first, use ./scripts/png2npy.py to generate .npy files and use data/GTLQnpy_dataset.py; second (recommended), use a .pklv4 dataset with data/LRHR_PKL_dataset.py. Please refer to SRFlow for more details. Prepared datasets can be downloaded here.
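Below is a minimal sketch of what the PNG-to-.npy conversion amounts to; the bundled ./scripts/png2npy.py is the authoritative script, and the folder paths here are only examples.

# Illustrative PNG -> .npy conversion (see ./scripts/png2npy.py for the official script).
import glob
import os

import cv2
import numpy as np

src_dir = "datasets/DIV2K/HR"      # example folder with .png images
dst_dir = "datasets/DIV2K/HR_npy"  # example output folder for .npy files
os.makedirs(dst_dir, exist_ok=True)

for png_path in sorted(glob.glob(os.path.join(src_dir, "*.png"))):
    img = cv2.imread(png_path, cv2.IMREAD_UNCHANGED)  # HWC, BGR, uint8
    name = os.path.splitext(os.path.basename(png_path))[0]
    np.save(os.path.join(dst_dir, name + ".npy"), img)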

Training

To train HCFlow for face image SR, general image SR, or image rescaling, run one of the following commands:

cd codes

# face image SR
python train_HCFlow.py --opt options/train/train_SR_CelebA_8X_HCFlow.yml

# general image SR
python train_HCFlow.py --opt options/train/train_SR_DF2K_4X_HCFlow.yml

# image rescaling
python train_HCFlow.py --opt options/train/train_Rescaling_DF2K_4X_HCFlow.yml

All trained models can be downloaded from here.

Testing

Please follow the Quick Run section; just modify the dataset paths in the corresponding test_*_HCFlow.yml file.

Results

We achieved state-of-the-art performance on general image SR, face image SR and image rescaling.

For more results, please refer to the paper and supplementary material.

Citation

@inproceedings{liang21hcflow,
  title={Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling},
  author={Liang, Jingyun and Lugmayr, Andreas and Zhang, Kai and Danelljan, Martin and Van Gool, Luc and Timofte, Radu},
  booktitle={IEEE International Conference on Computer Vision},
  year={2021}
}

License & Acknowledgement

This project is released under the Apache 2.0 license. The code is based on MMSR, SRFlow, IRN and Glow-pytorch. Please also follow their licenses. Thanks for their great work.

Comments
  • Testing without GT

    Is there a way to run the test without GT? I just want to run inference with the model. I found a mode called LQ which, I think, should only load the images in the LR directory. But this mode gives me the following error in LQ_dataset.py, line 88:

    assert real_crop * self.opt['scale'] * 2 > self.opt['kernel_size']
    TypeError: '>' not supported between instances of 'int' and 'NoneType'

    solved ✅ 
    opened by AhmedHashish123 4
  • Add Docker environment & web demo

    Hey @JingyunLiang !👋

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! View it here: https://replicate.ai/jingyunliang/hcflow-sr, which currently supports Image Super-Resolution.

    Claim your page here so you can edit it, and we'll feature it on our website and tweet about it too.

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by chenxwh 2
  • The code implementation and the paper description seem different

    Hi, your work is excellent, but there is one thing I don't understand.

    What is written in the paper is:

    "A diagonal covariance matrix with all diagonal elements close to zero"

    But the code implementation in HCFlowNet_SR_arch.py, line 64, is: Basic.GaussianDiag.logp(lr, -torch.ones_like(lr)*6, fake_lr_from_hr)

    Why use -torch.ones_like(lr)*6 as the covariance matrix? This seems inconsistent with the description in the paper.

    opened by xmyhhh 2
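A likely explanation, assuming HCFlow follows the SRFlow convention in which the second argument of GaussianDiag.logp is the log standard deviation rather than the covariance itself: -6 then corresponds to a standard deviation of exp(-6) ≈ 0.0025 and a variance of roughly 6e-6, i.e. diagonal entries close to zero, which is consistent with the paper. A quick numeric check under that assumption:

import math

log_sigma = -6.0                 # the constant used in the code, interpreted as log std
sigma = math.exp(log_sigma)      # ~0.0025
variance = sigma ** 2            # ~6.1e-06, i.e. a diagonal entry close to zero
print(sigma, variance)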
  • environment

    ImportError: /home/hbw/gcc-build-5.4.0/lib64/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by /home/hbw/anaconda3/lib/python3.8/site-packages/scipy/fft/_pocketfft/pypocketfft.cpython-38-x86_64-linux-gnu.so)

    Is this error due to my GCC version being too low? Which GCC version do you use? Looking forward to your reply!

    opened by hbw945 2
  • Code versions of BRISQUE and NIQE used in paper

    Hi, I have run performance tests with the Matlab versions of the NIQE and BRISQUE codes and found deviations from the values reported in the paper. Could you please provide a link to the code you used? thanks a lot~

    solved ✅ 
    opened by xmyhhh 1
  • Update on Replicate demo

    Hello again @JingyunLiang :),

    This pull request does a few little things:

    • Updated the demo link with an icon in README as you suggested
    • A bugfix for cleaning temporary directory on cog

    We have added more functionality to the Examples page of your model: as the owner of the page, you can now add and delete examples to customise the gallery as you like.

    Also, you can run cog push if you would like to update this model or any other models on Replicate in the future 😄

    opened by chenxwh 1
  • About training and inference time?

    Thanks for your nice work!

    I would like to know how much time you need to train and run inference with your models.

    Furthermore, will information about params / FLOPs be reported?

    Thanks.

    solved ✅ 
    opened by TiankaiHang 1
  • RuntimeError: The size of tensor a (20) must match the size of tensor b (40) at non-singleton dimension 3

    Hi, I encountered this error when training HCFlowNet. I converted my .png dataset to a .pklv4 dataset and trained on Windows 10 with a single GPU. Could you please help me find the cause of the error? Thanks a lot.

    opened by William9Baker 0
  • How to build an invertible mapping between two variables whose dimensions are different ?

    Maybe this is a stupid question, but I have been puzzled for quite a long time. In the image super-resolution task, the input and output have different dimensions. How can an invertible mapping be built between them? I notice that you calculate the determinant of the Jacobian, so I thought the mapping here was strictly invertible?

    opened by Wangbk-dl 0
  • How to make an invertible mapping between two variables whose dimensions are different ?

    Maybe this is a stupid question, but I have been puzzled for quite a long time. In the image super-resolution task, the input and output have different dimensions. How can such an invertible mapping between them be built? For example: suppose I have a low-resolution (LR) image x and an invertible function G. I can feed the LR image x into G and generate an HR image y. But can you ensure that we obtain the same x when we feed y into G_inverse?

    y = G(x), x' = G_inverse(y) =? x

    I would appreciate it if you could offer some help.

    opened by Wangbk-dl 0
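A general remark on the two questions above: flow-based SR/rescaling models keep the mapping dimension-preserving by pairing the LR image with a latent variable that carries the remaining high-frequency degrees of freedom, so HR ↔ (LR, z) has matching total dimensionality and can be inverted exactly. The toy bookkeeping below only illustrates this dimension count; the variable names are made up and not taken from the HCFlow code.

# Toy dimension bookkeeping for an invertible HR <-> (LR, z) mapping (illustrative only).
C, H, W, s = 3, 160, 160, 4           # example HR shape and scale factor

hr_dim = C * H * W                    # degrees of freedom in the HR image
lr_dim = C * (H // s) * (W // s)      # degrees of freedom in the LR image
z_dim = hr_dim - lr_dim               # the latent carries the high-frequency remainder

assert hr_dim == lr_dim + z_dim       # total dimensionality is preserved, so a bijection is possible
print(hr_dim, lr_dim, z_dim)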
  • New Super-Resolution Benchmarks

    Hello,

    MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.

    If you are interested in participating, you can add your algorithm following the submission steps:

    We would be grateful for your feedback on our work!

    opened by EvgeneyBogatyrev 0
  • Why NLL is negative during the training?

    Great work! During the training process, we found that the output NLL is negative. But theoretically, NLL should be positive. Is there any explanation for this?

    opened by IMSEMZPZ 0
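A possible explanation for the last question: the NLL here is that of a continuous density, measured in nats, and probability densities may exceed 1, so the log-density can be positive and the NLL negative; only discrete probabilities are bounded by 1. A minimal numerical illustration, unrelated to the HCFlow code itself:

import math

# Log-density of a narrow 1-D Gaussian N(0, sigma^2) evaluated at its mean.
sigma = 0.01
log_p = -math.log(sigma * math.sqrt(2.0 * math.pi))  # ~ +3.69 nats (density > 1)
nll = -log_p                                          # ~ -3.69, i.e. a negative NLL
print(log_p, nll)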