Equivariant Imaging: Learning Beyond the Range Space


Dongdong Chen, Julián Tachella, Mike E. Davies.

The University of Edinburgh

In ICCV 2021 (oral)

Figure: Learning to image from only measurements. Training an imaging network through just measurement consistency (MC) does not significantly improve the reconstruction over the simple pseudo-inverse A†y. However, by enforcing invariance in the reconstructed image set, equivariant imaging (EI) performs almost as well as a fully supervised network. Top: sparse-view CT reconstruction; bottom: pixel inpainting. PSNR is shown in the top-right corner of each image.

EI is a new self-supervised, end-to-end, physics-based learning framework for inverse problems. It comes with theoretical guarantees and leverages simple but fundamental priors about natural signals: symmetry and low dimensionality.

Get started quickly

  • Please find the blog post for a quick introduction to EI.
  • Please find the core implementation of EI at './ei/closure/ei.py' (ei.py).
  • Please find the 30-line script get_started.py and the toy compressed sensing example to get started with EI.

Overview

The problem: Imaging systems capture noisy measurements y = Ax + ε of a signal x through a linear operator A. We aim to learn the reconstruction function f: y ↦ x where

  • NO ground-truth data is available for training, as most inverse problems don't have ground truth;
  • only a single forward operator A is available;
  • A has a non-trivial nullspace (e.g. fewer measurements than signal entries).

A toy sketch of this measurement model is given after the list below.
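To make the setup concrete, here is a minimal sketch (not taken from this repo; all names are illustrative) of the measurement model y = Ax + ε, using random pixel inpainting as the linear operator A:

```python
# Toy measurement model y = A x + eps, with pixel inpainting as the operator A.
import torch

def inpainting_forward(x, mask, noise_std=0.0):
    """Apply a binary mask (the operator A) to an image x and add Gaussian noise."""
    y = mask * x
    if noise_std > 0:
        y = y + noise_std * torch.randn_like(y)
    return y

x = torch.rand(1, 1, 64, 64)                     # unknown signal x (never observed in practice)
mask = (torch.rand_like(x) > 0.3).float()        # A drops ~30% of pixels -> non-trivial nullspace
y = inpainting_forward(x, mask, noise_std=0.01)  # observed measurements y
```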

The challenge:

  • We have NO information about the signal set X outside the range space of A⊤ (equivalently, of A†).
  • It is IMPOSSIBLE to learn the signal set X from the measurements y alone.

The motivation:

We assume the signal set X has a low-dimensional structure and is invariant to a group of transformations T_g (e.g. shifts, rotations, scalings, reflections) associated with a group G, such that T_g x ∈ X for every x ∈ X, i.e. the sets T_g X and X are the same. For example,

  • natural images are shift invariant;
  • in CT/MRI data, organs can be imaged at different angles, making the problem invariant to rotation.

A hedged code example of such a transformation group is sketched after this list.
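As an illustration (the function names below are ours, not the repo's), a transformation group can be as simple as circular shifts or 90-degree rotations acting on image batches:

```python
# Two example group actions T_g: circular shifts and 90-degree rotations.
import torch

def shift(x, dy, dx):
    """Shift-group action T_g: circularly shift an image batch by (dy, dx) pixels."""
    return torch.roll(x, shifts=(dy, dx), dims=(-2, -1))

def rotate90(x, k):
    """Discrete rotation action T_g: rotate an image batch by k * 90 degrees."""
    return torch.rot90(x, k, dims=(-2, -1))
```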

Key observations:

  • Invariance provides access to implicit operators A_g = A T_g with potentially different range spaces: y = A T_g x' where x' = T_g⁻¹ x. Obviously, x' also belongs to the signal set (see the small numerical illustration below).
  • The composition f ∘ A is equivariant to the group of transformations: f(A T_g x) = T_g f(A x) for all g ∈ G.
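A tiny numerical illustration of the first observation, under the toy inpainting/shift assumptions used above (purely illustrative, not the paper's experiment):

```python
# With an inpainting mask A that only keeps the top rows and a shift T_g,
# the implicit operator A T_g measures rows of x that A alone never sees.
import torch

x = torch.arange(1.0, 17.0).reshape(1, 1, 4, 4)     # a 4x4 test image
mask = torch.zeros_like(x); mask[..., :2, :] = 1.0  # A keeps only the top two rows
A = lambda z: mask * z
T = lambda z: torch.roll(z, shifts=2, dims=-2)      # T_g: circular shift by two rows

print(A(x).squeeze())     # measures rows 0-1 of x (values 1..8)
print(A(T(x)).squeeze())  # measures rows 2-3 of x (values 9..16): a different range space
```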

Figure: Learning with and without equivariance in a toy 1D signal inpainting task. The signal set consists of different scalings of a triangular signal. On the left, the dataset does not enjoy any invariance, and hence it is not possible to learn the data distribution in the nullspace of A; the network can inpaint the signal in an arbitrary way (in green) while achieving zero data-consistency loss. On the right, the dataset is shift invariant: the range space of A is shifted via the transformations T_g, and the network inpaints the signal correctly.

Equivariant Imaging: to learn f using only the measurements y, all you need to do is:

  • Define:
  1. define a transformation group G based on the known invariances of the signal set;
  2. define a neural reconstruction function f_θ(y) = G_θ(A† y), where A† is the (approximate) pseudo-inverse of A and G_θ is a UNet-like neural net.
  • Calculate:
  1. calculate x^(1) = f_θ(y) as the estimate of x;
  2. calculate x^(2) = T_g x^(1) by transforming x^(1);
  3. calculate x^(3) = f_θ(A x^(2)) by reconstructing x^(2) from its measurement A x^(2).

Figure: flowchart of the EI training scheme.

  • Train: finally, learn the reconstruction function f_θ by solving arg min_θ ‖A x^(1) − y‖² + α ‖x^(2) − x^(3)‖², i.e. a measurement-consistency (MC) term plus an equivariance (EI) term (a self-contained training-step sketch follows below).
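For concreteness, here is a minimal, self-contained sketch of one EI training step. It assumes the toy inpainting operator and a small convolutional net standing in for the UNet; for the actual implementation see './ei/closure/ei.py':

```python
# One EI training step on a single measurement y (illustrative only).
import torch
import torch.nn as nn

# Forward operator A: random pixel inpainting; its pseudo-inverse is A itself.
mask = (torch.rand(1, 1, 64, 64) > 0.3).float()
A = lambda z: mask * z
A_dag = A

# f_theta(y) = G_theta(A^+ y); a tiny conv net plays the role of the UNet G_theta.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))
f = lambda y: G(A_dag(y))

def T(z):
    """A random group action T_g: circular shift of the image."""
    dy = torch.randint(0, z.shape[-2], (1,)).item()
    dx = torch.randint(0, z.shape[-1], (1,)).item()
    return torch.roll(z, shifts=(dy, dx), dims=(-2, -1))

opt = torch.optim.Adam(G.parameters(), lr=1e-4)
y = A(torch.rand(1, 1, 64, 64))       # only the measurement is ever observed

x1 = f(y)                             # (1) estimate of x from y
x2 = T(x1)                            # (2) transformed estimate
x3 = f(A(x2))                         # (3) reconstruction of x2 from its own measurement
alpha = 1.0                           # equivariance weight (illustrative value)
loss = ((A(x1) - y) ** 2).mean() + alpha * ((x3 - x2) ** 2).mean()
loss.backward()
opt.step()
opt.zero_grad()
```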

Requirements

All required packages are listed in the Anaconda environment.yml file. You can create the environment by running

conda env create -f environment.yml

Test

We provide the trained models used in the paper, which can be downloaded from Google Drive. Place the downloaded folder 'ckp' in the root path, then evaluate the trained models by running

python3 demo_test_inpainting.py

and

python3 demo_test_ct.py

Train

To train EI for a given inverse problem (inpainting or CT), run

python3 demo_train.py --task 'inpainting'

or run the provided bash script to train the models for both the CT and inpainting tasks:

bash train_paper_bash.sh

Train your models

To train your EI models on your own dataset for a specific inverse problem (e.g. inpainting), run

python3 demo_train.py --h
  • Note: you may have to implement the forward model (physics) if you want to solve a new inverse problem (see the hypothetical sketch below).
  • Note: you only need to specify some basic settings (e.g. the path of your training set).
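If you do need a new forward model, the snippet below sketches the kind of operator you would provide. The exact interface expected by the repo's physics code may differ; the class and method names here are hypothetical:

```python
# Hypothetical forward model for a super-resolution task (not the repo's API).
import torch.nn.functional as F

class Downsampling:
    """Linear operator A: average-pool an image by a factor `scale`."""
    def __init__(self, scale=4):
        self.scale = scale

    def A(self, x):            # forward: x -> y
        return F.avg_pool2d(x, self.scale)

    def A_dagger(self, y):     # approximate pseudo-inverse: y -> x
        return F.interpolate(y, scale_factor=self.scale, mode='nearest')
```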

Citation

@inproceedings{chen2021equivariant,
	title={Equivariant Imaging: Learning Beyond the Range Space},
	author={Chen, Dongdong and Tachella, Juli{\'a}n and Davies, Mike E},
	booktitle={Proceedings of the International Conference on Computer Vision (ICCV)},
	year={2021}
}