Official PyTorch code for Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021)

Overview

This repository is the official PyTorch implementation of Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (arxiv, supp).

🚀 🚀 🚀 News:


Normalizing flows have recently demonstrated promising results for low-level vision tasks. For image super-resolution (SR), they learn to predict diverse photo-realistic high-resolution (HR) images from the low-resolution (LR) image rather than learning a deterministic mapping. For image rescaling, they achieve high accuracy by jointly modelling the downscaling and upscaling processes. While existing approaches employ specialized techniques for these two tasks, we set out to unify them in a single formulation. In this paper, we propose the hierarchical conditional flow (HCFlow) as a unified framework for image SR and image rescaling. More specifically, HCFlow learns a bijective mapping between HR and LR image pairs by modelling the distribution of the LR image and the remaining high-frequency component simultaneously. In particular, the high-frequency component is conditional on the LR image in a hierarchical manner. To further enhance the performance, other losses such as perceptual loss and GAN loss are combined with the commonly used negative log-likelihood loss in training. Extensive experiments on general image SR, face image SR and image rescaling have demonstrated that the proposed HCFlow achieves state-of-the-art performance in terms of both quantitative metrics and visual quality.
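As a rough illustration of the likelihood term (a minimal sketch, not taken from the released code; the function name, tensor shapes, and the standard Gaussian prior are assumptions), the negative log-likelihood of a flow can be computed from the latent variable and the accumulated log-determinant of the Jacobian as below. In training, this term is combined with the pixel, perceptual and GAN losses mentioned above.

# Minimal sketch: NLL of a normalizing flow under a standard Gaussian prior,
# reported in bits per dimension. Illustrative only; names and shapes are assumed
# and are not taken from the HCFlow code base.
import math
import torch

def flow_nll_bits_per_dim(z, logdet):
    # z: latent tensor of shape (N, C, H, W); logdet: log|det J| per sample, shape (N,)
    num_dims = z[0].numel()
    # log-density of z under N(0, I), summed over all latent dimensions
    log_pz = (-0.5 * (z ** 2 + math.log(2 * math.pi))).flatten(1).sum(dim=1)
    log_px = log_pz + logdet  # change-of-variables formula
    return (-log_px / (num_dims * math.log(2.0))).mean()

# Toy usage:
# z, logdet = torch.randn(4, 3, 32, 32), torch.zeros(4)
# print(flow_nll_bits_per_dim(z, logdet))  # roughly 2.05 bits/dim for prior samples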


Requirements

  • Python 3.7, PyTorch == 1.7.1
  • Requirements: opencv-python, lpips, natsort, etc.
  • Platforms: Ubuntu 16.04, cuda-11.0
cd HCFlow-master
pip install -r requirements.txt 

Quick Run (takes 1 Minute)

To run the code without preparing data, use one of the following commands:

cd codes
# face image SR
python test_HCFlow.py --opt options/test/test_SR_CelebA_8X_HCFlow.yml

# general image SR
python test_HCFlow.py --opt options/test/test_SR_DF2K_4X_HCFlow.yml

# image rescaling
python test_HCFlow.py --opt options/test/test_Rescaling_DF2K_4X_HCFlow.yml

Data Preparation

The framework of this project is based on MMSR and SRFlow. To prepare data, put the training and testing sets in ./datasets, e.g. ./datasets/DIV2K/HR/0801.png. Commonly used SR datasets can be downloaded here. There are two ways to accelerate data loading: first, use ./scripts/png2npy.py to generate .npy files and load them with data/GTLQnpy_dataset.py; second (recommended), use the .pklv4 dataset format with data/LRHR_PKL_dataset.py. Please refer to SRFlow for more details. Prepared datasets can be downloaded here.
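For reference, the .npy conversion essentially reads each image and saves the raw array; a simplified sketch is shown below (illustrative only; the actual ./scripts/png2npy.py and its paths and arguments may differ).

# Simplified sketch of PNG -> .npy conversion for faster data loading.
# Illustrative only: the real ./scripts/png2npy.py may use different paths and options.
import glob
import os

import cv2
import numpy as np

src_dir = "./datasets/DIV2K/HR"      # assumed layout, see above
dst_dir = "./datasets/DIV2K/HR_npy"  # hypothetical output folder
os.makedirs(dst_dir, exist_ok=True)

for path in sorted(glob.glob(os.path.join(src_dir, "*.png"))):
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # BGR uint8 array
    name = os.path.splitext(os.path.basename(path))[0]
    np.save(os.path.join(dst_dir, name + ".npy"), img)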

Training

To train HCFlow for general image SR, face image SR, or image rescaling, run one of the following commands:

cd codes

# face image SR
python train_HCFlow.py --opt options/train/train_SR_CelebA_8X_HCFlow.yml

# general image SR
python train_HCFlow.py --opt options/train/train_SR_DF2K_4X_HCFlow.yml

# image rescaling
python train_HCFlow.py --opt options/train/train_Rescaling_DF2K_4X_HCFlow.yml

All trained models can be downloaded from here.

Testing

Please follow the Quick Run section; just modify the dataset paths in the corresponding test_*_HCFlow.yml file.
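If helpful, you can sanity-check the dataset paths in a config before running. The snippet below is a minimal sketch that assumes the config is plain YAML and uses MMSR/SRFlow-style dataroot_GT / dataroot_LQ keys, which may differ in your file.

# Quick sanity check of dataset paths in a test config (illustrative only;
# the option keys follow the MMSR/SRFlow convention and may not match exactly).
import yaml

with open("options/test/test_SR_DF2K_4X_HCFlow.yml") as f:
    opt = yaml.safe_load(f)

for name, ds in (opt.get("datasets") or {}).items():
    print(name, ds.get("dataroot_GT"), ds.get("dataroot_LQ"))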

Results

We achieved state-of-the-art performance on general image SR, face image SR and image rescaling.

Please refer to the paper and supp for more results and details.

Citation

@inproceedings{liang21hcflow,
  title={Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling},
  author={Liang, Jingyun and Lugmayr, Andreas and Zhang, Kai and Danelljan, Martin and Van Gool, Luc and Timofte, Radu},
  booktitle={IEEE International Conference on Computer Vision},
  year={2021}
}

License & Acknowledgement

This project is released under the Apache 2.0 license. The code is based on MMSR, SRFlow, IRN and Glow-pytorch. Please also follow their licenses. Thanks for their great work.

Comments
  • Testing without GT

    Is there a way to run the test without GT? I just want to run inference with the model. I found a mode called LQ which, I think, should only load the images in the LR directory. But this mode gives me the following error in LQ_dataset.py, line 88:

        assert real_crop * self.opt['scale'] * 2 > self.opt['kernel_size']
        TypeError: '>' not supported between instances of 'int' and 'NoneType'

    solved ✅ 
    opened by AhmedHashish123 4
  • Add Docker environment & web demo

    Hey @JingyunLiang !👋

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! View it here: https://replicate.ai/jingyunliang/hcflow-sr, which currently supports Image Super-Resolution.

    Claim your page here so you can edit it, and we'll feature it on our website and tweet about it too.

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by chenxwh 2
  • The code implementation and the paper description seem different

    Hi, your work is excellent, but there is one thing I don't understand.

    What is written in the paper is:

    "A diagonal covariance matrix with all diagonal elements close to zero"

    But the code implementation in HCFlowNet_SR_arch.py, line 64, is: Basic.GaussianDiag.logp(lr, -torch.ones_like(lr)*6, fake_lr_from_hr)

    Why use -torch.ones_like(lr)*6 as the covariance matrix? This seems to be inconsistent with the description in the paper.

    opened by xmyhhh 2
  • environment

    ImportError: /home/hbw/gcc-build-5.4.0/lib64/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by /home/hbw/anaconda3/lib/python3.8/site-packages/scipy/fft/_pocketfft/pypocketfft.cpython-38-x86_64-linux-gnu.so)

    Is this error due to my GCC version being too low? Which GCC version do you use? Looking forward to your reply!

    opened by hbw945 2
  • Code versions of BRISQUE and NIQE used in paper

    Hi, I have run performance tests with the MATLAB versions of the NIQE and BRISQUE codes and found deviations from the values reported in the paper. Could you please provide a link to the code you used? Thanks a lot!

    solved ✅ 
    opened by xmyhhh 1
  • Update on Replicate demo

    Hello again @JingyunLiang :),

    This pull request does a few little things:

    • Updated the demo link with an icon in README as you suggested
    • A bugfix for cleaning temporary directory on cog

    We have added more functionality to the Example page of your model: as the owner of the page, you can now add and delete examples to customise the gallery as you like.

    Also, you can run cog push if you would like to update this model or any other models on Replicate in the future 😄

    opened by chenxwh 1
  • About training and inference time?

    Thanks for your nice work!

    I would like to know how much time is needed to train and run inference with your models.

    Furthermore, will information about params / FLOPs be reported?

    Thanks.

    solved ✅ 
    opened by TiankaiHang 1
  • RuntimeError: The size of tensor a (20) must match the size of tensor b (40) at non-singleton dimension 3

    Hi, I encountered this error when training HCFlowNet. I converted my .png dataset to the .pklv4 format and trained on Windows 10 with a single GPU. Could you please help me find the cause? Thanks a lot.

    opened by William9Baker 0
  • How to build an invertible mapping between two variables whose dimensions are different?

    Maybe this is a stupid question, but I have been puzzled for quite a long time. In the image super-resolution task, the input and output have different dimensions. How can an invertible mapping be built between them? I notice that you calculate the determinant of the Jacobian, so I assumed the mapping here is strictly invertible?

    opened by Wangbk-dl 0
  • How to make an invertible mapping between two variables whose dimensions are different?

    Maybe this is a stupid question, but I have been puzzled for quite a long time. In the image super-resolution task, the input and output have different dimensions. How can such an invertible mapping be built between them? Take an example: if I have a low-resolution (LR) image x and an invertible function G, I can feed x into G and generate an HR image y. But can you ensure that we obtain an output identical to x when we feed y into G_inverse?

    y = G(x)
    x' = G_inverse(y) =? x

    I would appreciate it if you could offer some help.

    opened by Wangbk-dl 0
  • New Super-Resolution Benchmarks

    Hello,

    MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.

    If you are interested in participating, you can add your algorithm by following the submission steps.

    We would be grateful for your feedback on our work!

    opened by EvgeneyBogatyrev 0
  • Why is the NLL negative during training?

    Great work! During the training process, we found that the output NLL is negative. But theoretically, NLL should be positive. Is there any explanation for this?

    opened by IMSEMZPZ 0
Owner
Jingyun Liang
PhD Student at Computer Vision Lab, ETH Zurich