subpixel: A subpixel convnet for super resolution with Tensorflow


Left: input images / Right: output images with 4x super-resolution after 6 epochs:

See more examples inside the images folder.

In CVPR 2016, Shi et al. from Twitter VX (previously Magic Pony) published a paper called Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network [1]. Here we present a reimplementation of their method and discuss future applications of the technology.

But first let us discuss some background.

Convolutions, transposed convolutions and subpixel convolutions

Convolutional neural networks (CNNs) are now standard neural network layers for computer vision. Transposed convolutions (sometimes referred to as deconvolutions) are the GRADIENTS of a convolutional layer. Transposed convolutions were, as far as we know, first used by Zeiler and Fergus [2] for visualization purposes while improving their AlexNet model.

For visualization purposes, note that the convolutions in the present context are a sequence of inner products of a given filter (or kernel) with pieces of a larger image. This operation is highly parallelizable, since the kernel is the same throughout the image. People used to refer to convolutions as locally connected layers with shared parameters. Check out the figure below by Dumoulin and Visin [3]:

source
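To make the "sequence of inner products" concrete, here is a minimal NumPy sketch (our own illustration, not from the references) of a valid convolution of a 2 x 2 kernel over a 4 x 4 image:

import numpy as np

# A "valid" convolution as inner products of a shared 2x2 kernel
# with every 2x2 patch of a 4x4 image (illustration only)
image = np.arange(16.0).reshape(4, 4)
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        # one inner product per patch; the kernel is the same everywhere
        out[i, j] = np.sum(image[i:i+2, j:j+2] * kernel)
print(out)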

Note though that convolutional neural networks can be defined with strides, or we can follow the convolution with maxpooling, to downsample the input image. The equivalent backward operation of a convolution with strides, in other words its gradient, is an upsampling operation: zeros are filled in between the non-zero pixels, followed by a convolution with the kernel rotated 180 degrees. See the representation copied from Dumoulin and Visin again:

source
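As a minimal illustration of the zero-filling step (our own sketch, not from the references), upsampling a 2 x 2 input by a stride of 2 before the rotated-kernel convolution looks like this:

import numpy as np

# The zero-filling step of the strided convolution gradient: a 2x2 input
# is dilated to 4x4 by inserting zeros between the non-zero pixels; a
# convolution with the 180-degree rotated kernel would follow
x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
up = np.zeros((4, 4))
up[::2, ::2] = x
print(up)
# [[1. 0. 2. 0.]
#  [0. 0. 0. 0.]
#  [3. 0. 4. 0.]
#  [0. 0. 0. 0.]]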

For classification purposes, all we need is the feedforward pass of a convolutional neural network to extract features at different scales. But for applications such as image super resolution and autoencoders, both downsampling and upsampling operations are necessary in the feedforward pass. The community took inspiration from how the gradients are implemented in CNNs and applied them as a feedforward layer instead.

But as one may have observed, the upsampling operation as implemented above with strided convolution gradients adds zero values to upscale the image, which have to be later filled in with meaningful values. Maybe even worse, these zero values carry no gradient information that can be backpropagated through.

To cope with that problem, Shi et al. [1] proposed what we argue to be one of the most useful recent convnet tricks (at least in my opinion as a generative model researcher!). They proposed a subpixel convolutional neural network layer for upscaling. This layer essentially uses regular convolutional layers followed by a specific type of image reshaping called a phase shift. In other words, instead of putting zeros in between pixels and having to do extra computation, they calculate more convolutions in lower resolution and resize the resulting map into an upscaled image. This way, no meaningless zeros are necessary. Check out the figure below from their paper. Follow the colors to get an intuition for how they do the image resizing. See their paper [1] for further understanding.

source

Next we will discuss our implementation of this method and, later, what we foresee to be its implications everywhere upscaling in convolutional neural networks is necessary.

Subpixel CNN layer

Following Shi et al., the equation for implementing the phase shift for CNNs is:

source
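Transcribed from the paper (in case the image above does not render), the phase shift operation reads:

$$\mathcal{PS}(T)_{x, y, c} = T_{\lfloor x/r \rfloor,\; \lfloor y/r \rfloor,\; C \cdot r \cdot \mathrm{mod}(y, r) \,+\, C \cdot \mathrm{mod}(x, r) \,+\, c}$$

where r is the upscaling factor and C is the number of output channels, so the input tensor T has C * r^2 channels.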

In numpy, we can write this as

import numpy as np

def PS(I, r):
  # Rearranges an (H, W, C*r**2) tensor into an (H*r, W*r, C) tensor.
  # Note: this reference implementation is written for a single output
  # channel (C = 1); the color case is handled by splitting channels,
  # as in the Tensorflow code below.
  assert len(I.shape) == 3
  assert r > 0
  r = int(r)
  O = np.zeros((I.shape[0]*r, I.shape[1]*r, I.shape[2] // (r**2)))
  for x in range(O.shape[0]):
    for y in range(O.shape[1]):
      for c in range(O.shape[2]):
        c += 1  # use a 1-based channel index in the index arithmetic below
        a = x // r
        b = y // r
        d = c*r*(y % r) + c*(x % r)
        O[x, y, c-1] = I[a, b, d]
  return O
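As a quick sanity check (a hypothetical example of ours, not from the original README), a 2 x 2 input with r^2 = 4 channels should phase-shift into a 4 x 4 single-channel image:

I = np.arange(2 * 2 * 4, dtype="float64").reshape(2, 2, 4)
O = PS(I, 2)
print(I.shape, "->", O.shape)  # (2, 2, 4) -> (4, 4, 1)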

To implement this in Tensorflow we would have to create a custom operator and its equivalent gradient. But after staring for a few minutes at the image depiction of the resulting operation, we noticed how to write it using just regular reshape, split and concatenate operations. To see this, note that the phase shift simply goes through the different channels of the output convolutional map and builds up neighborhoods of r x r pixels. We can do the same with a few lines of Tensorflow code:

import tensorflow as tf

def _phase_shift(I, r):
    # Helper function with main phase shift operation
    bsize, a, b, c = I.get_shape().as_list()
    X = tf.reshape(I, (bsize, a, b, r, r))
    X = tf.transpose(X, (0, 1, 2, 4, 3))  # bsize, a, b, r, r
    X = tf.split(1, a, X)  # a, [bsize, b, r, r]
    X = tf.concat(2, [tf.squeeze(x) for x in X])  # bsize, b, a*r, r
    X = tf.split(1, b, X)  # b, [bsize, a*r, r]
    X = tf.concat(2, [tf.squeeze(x) for x in X])  # bsize, a*r, b*r
    return tf.reshape(X, (bsize, a*r, b*r, 1))

def PS(X, r, color=False):
  # Main OP that you can arbitrarily use in your tensorflow code
  if color:
    Xc = tf.split(3, 3, X)
    X = tf.concat(3, [_phase_shift(x, r) for x in Xc])
  else:
    X = _phase_shift(X, r)
  return X
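For completeness, here is a hypothetical usage sketch (the placeholder shape is our assumption; the API is the same Tensorflow 0.x-era one used above): a convolution produces 3 * r^2 feature maps, and PS rearranges them into an RGB image r times larger in each dimension:

r = 4
# hypothetical batch of low-resolution maps with 3 * r**2 channels,
# e.g. the output of the last convolutional layer of a super-resolution net
X = tf.placeholder(tf.float32, shape=(16, 32, 32, 3 * r * r))
Y = PS(X, r, color=True)  # Y has shape (16, 128, 128, 3)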

The remainder of this library is an implementation of a subpixel CNN using the proposed PS implementation for super resolution of CelebA face images. The code was written on top of carpedm20/DCGAN-tensorflow, so follow the same instructions to use it:

$ python download.py --dataset celebA  # if this doesn't work, you will have to download the dataset by hand somewhere else
$ python main.py --dataset celebA --is_train True --is_crop True

Subpixel CNN future is bright

Here we want to forecast that subpixel CNNs are going to ultimately replace transposed convolutions (deconv, conv grad, or whatever you call it) in feedforward neural networks. Phase shift's gradient is much more meaningful, and the resizing operations are virtually free computationally. Our implementation is a high-level one, using default Tensorflow OPs. But next we will rewrite everything in Keras so that an even larger community can use it. Plus, a CUDA-backend-level implementation would be even more appreciated.

But for now we want to encourage the community to experiment with replacing deconv layers with subpixel operations everywhere. By everywhere we mean:

  • Conv-deconv autoencoders
    Similar to super-resolution, include subpixel in other autoencoder implementations, replacing deconv layers
  • Style transfer networks
    This didn't work as a lazy plug-and-play in our experiments. We have to look at it more carefully
  • Deep Convolutional Generative Adversarial Networks (DCGAN)
    We started doing this, but as predicted we have to change the hyperparameters. The network capacity is totally different from that of deconv layers.
  • Segmentation Networks (SegNets)
    ULTRA LOW hanging fruit! This one will be the easiest. Free paper, you're welcome!
  • wherever upscaling is done with zero padding

Join us in the revolution to get rid of meaningless zeros in feedforward convnets: give suggestions here, try our code!

Sample results

The top row is the input, the middle row is the output, and the bottom row is the ground truth.

by @dribnet

References

[1] Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. By Shi et al.
[2] Visualizing and Understanding Convolutional Networks. By Zeiler and Fergus.
[3] A guide to convolution arithmetic for deep learning. By Dumoulin and Visin.

Further reading

Alex J. Champandard made a really interesting analysis of this topic in this thread.
For discussions about the differences between phase shift and a straight-up resize, please see the companion notebook and this thread.
