A framework for joint super-resolution and image synthesis, without requiring real training data

SynthSR

This repository contains code to train a Convolutional Neural Network (CNN) for Super-resolution (SR), or joint SR and data synthesis. The method can also be configured to achieve denoising and bias field correction.

The network takes synthetic scans generated on the fly as inputs, and can be trained to regress either real or synthetic target scans. The synthetic scans are obtained by sampling a generative model built on the SynthSeg [1] package, which we really encourage you to have a look at!


In short, synthetic scans are generated at each mini-batch by: 1) randomly selecting a label map among a pool of training segmentations, 2) spatially deforming it in 3D, 3) sampling a Gaussian Mixture Model (GMM) conditioned on the deformed label map (see Figure 1 below), and 4) corrupting it with a random bias field. This gives us a synthetic scan at high resolution (HR). We then simulate thick slice spacing by blurring and downsampling it to low resolution (LR). For SR, we then train a network to learn the mapping between LR data (possibly multimodal, hence the joint synthesis) and HR synthetic scans. Moreover, if real images are available along with the training label maps, we can learn to regress the real images instead.
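
For intuition, here is a minimal, self-contained sketch of steps 3-4 and of the LR simulation, using only numpy and scipy. It is purely illustrative: the actual generative model is implemented as Keras/TensorFlow layers in SynthSR/labels_to_image_model.py, and the function name, the per-label GMM parameters and the slice-profile constant below are assumptions, not values from the repository.

    # Illustrative sketch only (not the repository's code). Assumptions: labels are
    # consecutive integers 0..K-1, per-label GMM means/stds are given, and 0.42 is a
    # rough FWHM-to-sigma conversion for the slice profile.
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def synth_lr_hr_pair(label_map, means, stds, hr_res=1.0, slice_spacing=5.0):
        # 3) sample a GMM conditioned on the (already deformed) label map
        hr = np.random.normal(means[label_map], stds[label_map])
        # 4) corrupt with a smooth, random multiplicative bias field
        bias = np.exp(gaussian_filter(0.3 * np.random.randn(*label_map.shape), sigma=20))
        hr = hr * bias
        # simulate thick slices: blur along z, then downsample to low resolution
        sigma_z = 0.42 * slice_spacing / hr_res
        lr = gaussian_filter(hr, sigma=(0, 0, sigma_z))
        lr = zoom(lr, (1.0, 1.0, hr_res / slice_spacing), order=1)
        return lr, hr  # network input (LR) and regression target (HR)

    # example with random placeholder inputs: a 160^3 label map with 10 labels
    labels = np.random.randint(0, 10, (160, 160, 160))
    lr, hr = synth_lr_hr_pair(labels, means=np.linspace(0, 225, 10), stds=np.full(10, 10.0))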


Figure 1: overview of SynthSR


Tutorials for Generation and Training

This repository contains code to train your own network for SR or joint SR and synthesis. Because the training function has a lot of options, we provide several tutorials to help you familiarise yourself with the different training/generation parameters. We emphasise that we provide example training data along with these scripts: 5 preprocessed, publicly available T1 scans at 1mm isotropic resolution [2], with corresponding label maps obtained with FreeSurfer [3]. The tutorials can be found in scripts, and they include:

  • Six generation scripts corresponding to different use cases (see Figure 2 below). We recommend going through all of them (even if you're only interested in case 1), since they successively introduce different functionalities.

  • One training script, explaining the main training parameters.

  • One script explaining how to estimate the parameters governing the GMM, in case you wish to train a model on your own data (a minimal sketch of the idea is shown below).
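
As a rough illustration of the idea behind that estimation (the reference implementation is SynthSR/estimate_priors.py; the helper below and its name are hypothetical), one can compute the mean and standard deviation of the intensities within each label of a training image:

    # Hypothetical sketch: per-label intensity statistics from one image/label-map pair.
    import numpy as np
    import nibabel as nib

    def estimate_gaussian_priors(image_path, labels_path):
        image = nib.load(image_path).get_fdata()
        labels = np.round(nib.load(labels_path).get_fdata()).astype(int)
        priors = {}
        for lab in np.unique(labels):
            values = image[labels == lab]
            priors[lab] = (float(values.mean()), float(values.std()))
        return priors  # {label: (mean, std)}, e.g. to centre the priors of the GMM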


Figure 2: Examples generated by running the tutorials on the provided data [2]. For each use case, we show the synthetic images used as inputs to the network, as well as the regression target.


Content

  • SynthSR: this is the main folder containing the generative model and training function:

    • labels_to_image_model.py: builds the generative model.

    • brain_generator.py: contains the class BrainGenerator, which is a wrapper around the model. New images can simply be generated by instantiating an object of this class and calling the method generate_image() (see the usage sketch after this list).

    • model_inputs.py: prepares the inputs of the generative model.

    • training.py: contains the function to train the network. All training parameters are explained there.

    • metrics_model.py: contains a Keras model that implements different loss functions.

    • estimate_priors.py: contains functions to estimate the prior distributions of the GMM parameters.

  • data: this folder contains the data for the tutorials (T1 scans [2], corresponding FreeSurfer segmentations and some other useful files)

  • scripts: in addition to the tutorials, we also provide a script to launch trainings from the terminal

  • ext: contains external packages.
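
For the BrainGenerator wrapper described in the list above, a minimal usage sketch might look as follows. The label-map path is a placeholder, and the exact constructor arguments and returned outputs depend on the chosen use case, so the generation tutorials in scripts remain the authoritative reference.

    # Hedged usage sketch; 'data/labels' is a placeholder, and constructor/return
    # details may differ between use cases (see the generation tutorials in scripts).
    from SynthSR.brain_generator import BrainGenerator

    # point the generator at a folder of training label maps
    brain_generator = BrainGenerator('data/labels')

    # each call synthesises a new training example on the fly: the synthetic
    # network input(s) and the corresponding regression target
    inputs, target = brain_generator.generate_image()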


Requirements

This code relies on several external packages (already included in ext):

  • lab2im: contains functions for data augmentation, and a simple version of the generative model, on which we build labels_to_image_model [1].

  • neuron: contains functions for deforming and resizing tensors, as well as functions to build the segmentation network [4,5].

  • pytool-lib: library required by the neuron package.

All the other requirements are listed in requirements.txt. We list here the most important dependencies:

  • tensorflow-gpu 2.0
  • tensorflow_probability 0.8
  • keras > 2.0
  • cuda 10.0 (required by tensorflow)
  • cudnn 7.0
  • nibabel
  • numpy, scipy, sklearn, tqdm, pillow, matplotlib, ipython, ...

Citation/Contact

This repository contains the code related to a submission that is still under review.

If you have any questions regarding the usage of this code, or any suggestions to improve it, you can contact us at:
[email protected]


References

[1] A Learning Strategy for Contrast-agnostic MRI Segmentation
Benjamin Billot, Douglas N. Greve, Koen Van Leemput, Bruce Fischl, Juan Eugenio Iglesias*, Adrian V. Dalca*
*contributed equally
MIDL 2020

[2] A novel in vivo atlas of human hippocampal subfields using high-resolution 3 T magnetic resonance imaging
J. Winterburn, J. Pruessner, S. Chavez, M. Schira, N. Lobaugh, A. Voineskos, M. Chakravarty
NeuroImage (2013)

[3] FreeSurfer
Bruce Fischl
NeuroImage (2012)

[4] Anatomical Priors in Convolutional Networks for Unsupervised Biomedical Segmentation
Adrian V. Dalca, John Guttag, Mert R. Sabuncu
CVPR 2018

[5] Unsupervised Data Imputation via Variational Inference of Deep Subspaces
Adrian V. Dalca, John Guttag, Mert R. Sabuncu
arXiv preprint (2019)
