Open Source Light Field Toolbox for Super-Resolution

Overview

BasicLFSR

BasicLFSR is an open-source and easy-to-use Light Field (LF) image Super-Resolution (SR) toolbox based on PyTorch, including a collection of LF image SR papers and a benchmark to comprehensively evaluate the performance of existing methods. We also provide simple pipelines to train/validate/test state-of-the-art methods so that you can get started quickly, and you can easily add your own methods to the benchmark.

Note: This repository will be updated on a regular basis, and the pretrained models of existing methods will be released one after another. Stay tuned!
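
To give a rough idea of what plugging a method in involves, the sketch below shows a hypothetical, minimal PyTorch model for LF image SR. The class name, constructor arguments (angRes, scale_factor) and the single-channel sub-aperture input layout are illustrative assumptions only; the interface actually expected by the train/test pipelines is defined by the model files already included in this repository.

    import torch
    import torch.nn as nn

    class get_model(nn.Module):  # class name chosen for illustration only
        def __init__(self, angRes=5, scale_factor=2, channels=32):
            super().__init__()
            # a plain convolutional trunk followed by sub-pixel upsampling
            self.body = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.LeakyReLU(0.1, inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.1, inplace=True),
                nn.Conv2d(channels, scale_factor ** 2, 3, padding=1),
            )
            self.upsample = nn.PixelShuffle(scale_factor)

        def forward(self, lr):
            # lr: a batch of low-resolution sub-aperture images, shape (B, 1, H, W)
            return self.upsample(self.body(lr))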

Methods

Methods     | Paper                                                                                                                               | Repository
LFSSR       | Light Field Spatial Super-Resolution Using Deep Efficient Spatial-Angular Separable Convolution. TIP2018                           | spatialsr/DeepLightFieldSSR
resLF       | Residual Networks for Light Field Image Super-Resolution. CVPR2019                                                                 | shuozh/resLF
HDDRNet     | High-Dimensional Dense Residual Convolutional Neural Network for Light Field Reconstruction. TPAMI2019                             | monaen/LightFieldReconstruction
LF-InterNet | Spatial-Angular Interaction for Light Field Image Super-Resolution. ECCV2020                                                       | YingqianWang/LF-InterNet
LFSSR-ATO   | Light Field Spatial Super-Resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization. CVPR2020 | jingjin25/LFSSR-ATO
LF-DFnet    | Light Field Image Super-Resolution Using Deformable Convolution. TIP2020                                                           | YingqianWang/LF-DFnet
MEG-Net     | End-to-End Light Field Spatial Super-Resolution Network Using Multiple Epipolar Geometry. TIP2021                                  | shuozh/MEG-Net

Datasets

We use the EPFL, HCInew, HCIold, INRIA and STFgantry datasets for both training and testing. Please first download the datasets via Baidu Drive (key: 7nzy) or OneDrive, and place the 5 datasets in the folder ./datasets/.

  • After downloading, you should find the following directory structure:

    ├──./datasets/
    │    ├── EPFL
    │    │    ├── training
    │    │    │    ├── Bench_in_Paris.mat
    │    │    │    ├── Billboards.mat
    │    │    │    ├── ...
    │    │    ├── test
    │    │    │    ├── Bikes.mat
    │    │    │    ├── Books__Decoded.mat
    │    │    │    ├── ...
    │    ├── HCI_new
    │    ├── ...
    
  • Run Generate_Data_for_Training.m to generate training data. The generated data will be saved in ./data_for_train/ (SR_5x5_2x, SR_5x5_4x).

  • Run Generate_Data_for_Test.m to generate test data. The generated data will be saved in ./data_for_test/ (SR_5x5_2x, SR_5x5_4x).
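
  • After running the two MATLAB scripts, a quick way to confirm that the output folders were created is to count the generated files from Python. Only the folder names given above are assumed; nothing is assumed about the file format inside them:

    import os

    for root in ('./data_for_train', './data_for_test'):
        for task in ('SR_5x5_2x', 'SR_5x5_4x'):
            folder = os.path.join(root, task)
            if os.path.isdir(folder):
                n_files = sum(len(files) for _, _, files in os.walk(folder))
                print(f'{folder}: {n_files} files')
            else:
                print(f'{folder}: missing')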

Benchmark

We benchmark several methods on the above datasets, using PSNR and SSIM for quantitative evaluation.
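
As a reference for the tables below, the sketch illustrates one common LF SR evaluation convention: PSNR and SSIM are computed on each sub-aperture view and then averaged over the 5x5 views of a scene. This is an assumption for illustration only; the exact protocol (color space, border handling, averaging order) is defined by the evaluation code in this repository.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def lf_psnr_ssim(sr, gt):
        # sr, gt: super-resolved and ground-truth light fields of shape (U, V, H, W),
        # single channel, values in [0, 1]
        psnrs, ssims = [], []
        for u in range(gt.shape[0]):
            for v in range(gt.shape[1]):
                psnrs.append(peak_signal_noise_ratio(gt[u, v], sr[u, v], data_range=1.0))
                ssims.append(structural_similarity(gt[u, v], sr[u, v], data_range=1.0))
        return float(np.mean(psnrs)), float(np.mean(ssims))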

PSNR and SSIM values achieved by different methods for 2xSR:

Method      | Scale | #Params. | EPFL               | HCInew             | HCIold             | INRIA              | STFgantry          | Average
Bilinear    | x2    | --       | 28.479949/0.918006 | 30.717944/0.919248 | 36.243278/0.970928 | 30.133901/0.945545 | 29.577468/0.931030 | 31.030508/0.936951
Bicubic     | x2    | --       | 29.739509/0.937581 | 31.887011/0.935637 | 37.685776/0.978536 | 31.331483/0.957731 | 31.062631/0.949769 | 32.341282/0.951851
VDSR        | x2    |          |                    |                    |                    |                    |                    |
EDSR        | x2    |          | 33.088922/0.962924 | 34.828374/0.959156 | 41.013989/0.987400 | 34.984982/0.976397 | 36.295865/0.981809 |
RCAN        | x2    |          |                    |                    |                    |                    |                    |
resLF       | x2    |          |                    |                    |                    |                    |                    |
LFSSR       | x2    |          | 33.670594/0.974351 | 36.801555/0.974910 | 43.811050/0.993773 | 35.279443/0.983202 | 37.943969/0.989818 |
LF-ATO      | x2    |          | 34.271635/0.975711 | 37.243620/0.976684 | 44.205264/0.994202 | 36.169943/0.984241 | 39.636445/0.992862 |
LF-InterNet | x2    |          |                    |                    |                    |                    |                    |
LF-DFnet    | x2    |          |                    |                    |                    |                    |                    |
MEG-Net     | x2    |          |                    |                    |                    |                    |                    |
LFT         | x2    |          |                    |                    |                    |                    |                    |

PSNR and SSIM values achieved by different methods for 4xSR:

Method      | Scale | #Params. | EPFL               | HCInew             | HCIold             | INRIA              | STFgantry          | Average
Bilinear    | x4    | --       | 24.567490/0.815793 | 27.084949/0.839677 | 31.688225/0.925630 | 26.226265/0.875682 | 25.203262/0.826105 | 26.954038/0.856577
Bicubic     | x4    | --       | 25.264206/0.832389 | 27.714905/0.851661 | 32.576315/0.934428 | 26.951718/0.886740 | 26.087451/0.845230 | 27.718919/0.870090
VDSR        | x4    |          |                    |                    |                    |                    |                    |
EDSR        | x4    |          |                    |                    |                    |                    |                    |
RCAN        | x4    |          |                    |                    |                    |                    |                    |
resLF       | x4    |          |                    |                    |                    |                    |                    |
LFSSR       | x4    |          |                    |                    |                    |                    |                    |
LF-ATO      | x4    |          |                    |                    |                    |                    |                    |
LF-InterNet | x4    |          |                    |                    |                    |                    |                    |
LF-DFnet    | x4    |          |                    |                    |                    |                    |                    |
MEG-Net     | x4    |          |                    |                    |                    |                    |                    |
LFT         | x4    |          |                    |                    |                    |                    |                    |

Train

  • Run train.py to perform network training. Example for training [model_name] on 5x5 angular resolution for 2x/4x SR:
    $ python train.py --model_name [model_name] --angRes 5 --scale_factor 2 --batch_size 8
    $ python train.py --model_name [model_name] --angRes 5 --scale_factor 4 --batch_size 4
    
  • Checkpoints and logs will be saved to ./log/, which has the following structure (a short checkpoint-inspection snippet follows the tree):
    ├──./log/
    │    ├── SR_5x5_2x
    │    │    ├── [dataset_name]
    │    │         ├── [model_name]
    │    │         │    ├── [model_name]_log.txt
    │    │         │    ├── checkpoints
    │    │         │    │    ├── [model_name]_5x5_2x_epoch_01_model.pth
    │    │         │    │    ├── [model_name]_5x5_2x_epoch_02_model.pth
    │    │         │    │    ├── ...
    │    │         │    ├── results
    │    │         │    │    ├── VAL_epoch_01
    │    │         │    │    ├── VAL_epoch_02
    │    │         │    │    ├── ...
    │    │         ├── [other_model_name]
    │    │         ├── ...
    │    ├── SR_5x5_4x
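  • A saved checkpoint can be inspected quickly from Python. This only assumes the .pth files are standard torch.save() artifacts; their exact contents depend on the training code, and the bracketed path components are placeholders:

    import torch

    ckpt_path = './log/SR_5x5_2x/[dataset_name]/[model_name]/checkpoints/[model_name]_5x5_2x_epoch_01_model.pth'
    ckpt = torch.load(ckpt_path, map_location='cpu')
    print(type(ckpt))
    if isinstance(ckpt, dict):
        print(list(ckpt.keys()))  # e.g. state-dict entries or training metadata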
    

Test

  • Run test.py to perform network inference. Example for testing [model_name] on 5x5 angular resolution for 2x/4x SR:

    $ python test.py --model_name [model_name] --angRes 5 --scale_factor 2  
    $ python test.py --model_name [model_name] --angRes 5 --scale_factor 4 
    
  • The PSNR and SSIM values for each dataset will be saved to ./log/, which has the following structure (a snippet for reassembling the saved views into a mosaic follows the tree):

    ├──./log/
    │    ├── SR_5x5_2x
    │    │    ├── [dataset_name]
    │    │        ├── [model_name]
    │    │        │    ├── [model_name]_log.txt
    │    │        │    ├── checkpoints
    │    │        │    │   ├── ...
    │    │        │    ├── results
    │    │        │    │    ├── Test
    │    │        │    │    │    ├── evaluation.xls
    │    │        │    │    │    ├── [dataset_1_name]
    │    │        │    │    │    │    ├── [scene_1_name]
    │    │        │    │    │    │    │    ├── [scene_1_name]_CenterView.bmp
    │    │        │    │    │    │    │    ├── [scene_1_name]_SAI.bmp
    │    │        │    │    │    │    │    ├── views
    │    │        │    │    │    │    │    │    ├── [scene_1_name]_0_0.bmp
    │    │        │    │    │    │    │    │    ├── [scene_1_name]_0_1.bmp
    │    │        │    │    │    │    │    │    ├── ...
    │    │        │    │    │    │    │    │    ├── [scene_1_name]_4_4.bmp
    │    │        │    │    │    │    ├── [scene_2_name]
    │    │        │    │    │    │    ├── ...
    │    │        │    │    │    ├── [dataset_2_name]
    │    │        │    │    │    ├── ...
    │    │        │    │    ├── VAL_epoch_01
    │    │        │    │    ├── ...
    │    │        ├── [other_model_name]
    │    │        ├── ...
    │    ├── SR_5x5_4x
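  • Following the naming shown in the tree above ([scene_name]_u_v.bmp with u and v in 0..4), the saved views can be reassembled into a single 5x5 mosaic for quick visual inspection. The bracketed path components are placeholders for the real dataset, model and scene names:

    import os
    import numpy as np
    import imageio.v2 as imageio

    views_dir = './log/SR_5x5_2x/[dataset_name]/[model_name]/results/Test/[dataset_1_name]/[scene_1_name]/views'
    scene = '[scene_1_name]'

    rows = []
    for u in range(5):
        row = [imageio.imread(os.path.join(views_dir, f'{scene}_{u}_{v}.bmp')) for v in range(5)]
        rows.append(np.concatenate(row, axis=1))  # place the 5 views of one angular row side by side
    mosaic = np.concatenate(rows, axis=0)         # stack the 5 angular rows vertically
    imageio.imwrite('mosaic.bmp', mosaic)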
    

Resources

We provide some of the original super-resolved images and other useful resources to help researchers reproduce the above results.

Other Resources

Contact

Any questions regarding this work can be addressed to [email protected].
