A medical imaging framework for PyTorch

Overview
Comments
  • Adding support for multiple image formats (.dcm etc) in dataloaders

    I was looking into NiftyNet, another popular TensorFlow-based medical imaging library, specifically its dataloaders, and I really liked the idea of supporting multiple image loaders. Any plans on implementing the same?

    As an initial guess, we could dynamically pass the appropriate loading function for each image format when building the input_handle and gt_handle:

    self.input_handle = nib.load(self.input_filename)
    self.gt_handle = nib.load(self.gt_filename)

    We may need to make some changes, as I saw that the slicing functionality depends on the nibabel/NIfTI format. I can start off the implementation if that is fine with you, and maybe we can review it later. A rough sketch of the idea is below.
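
    As a minimal sketch (not medicaltorch code), a loader could be picked by file extension; pydicom is assumed here as the DICOM reader:

    import os
    import nibabel as nib
    import pydicom  # hypothetical extra dependency for .dcm support

    def load_volume(filename):
        """Pick a loader based on the file extension."""
        ext = os.path.splitext(filename)[1].lower()
        if ext in ('.nii', '.gz'):
            return nib.load(filename)
        if ext == '.dcm':
            return pydicom.dcmread(filename)
        raise ValueError("Unsupported image format: {}".format(ext))

    # The handles could then be built format-agnostically:
    # self.input_handle = load_volume(self.input_filename)
    # self.gt_handle = load_volume(self.gt_filename)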

    If you have any further ideas, I can help with those as well.

    Thanks, Mohit

    opened by MohitTare 4
  • Notebook in "Getting started" page does not open

    Hello, I tried to open the notebook indicated in the Getting Started page, but I get the following error on the Colab website: Notebook loading error

    There was an error loading this notebook. Ensure that the file is accessible and try again. Failed to execute 'json' on 'Response': body stream is locked

    I'm using the Brave browser under Linux, if that helps. Thanks!

    opened by omendezmorales 3
  • Released new transforms

    Changelog

    1. Locally tested the implementations of the Clahe and Histogram Clipping transforms; we tested them against the ACDC 2017 dataset.
    2. Refactored Clahe and Histogram Clipping transform from previous commits.
    3. Implemented the Square Padding Transform (more details below).
    4. Implemented the RangeMappingMRI2D Transform (more details below; a new proposal for name would be gladly welcomed).
    5. Added the possibility of applying the Clahe and Histogram Clipping transforms to labeled samples, although we understand that the default behavior should be False.

    Square Padding Transform

    Given the output size N, it pads the matrix along the shorter 2D axis until it is square and then resizes it to N x N. For example:

    3 by 5 matrix and output size = 8.

    3 x 5 -> 5 x 5 (with padding) -> 8 x 8 using np.resize.
    4 x 3 -> 4 x 4 (with padding) -> 8 x 8 using np.resize.

    It is very important that the output size is bigger than any of the input dimensions. We used it for the ACDC 2017 2D MRI dataset, where each patient has a variable number of slices and a variable height and width per slice.
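
    A minimal NumPy sketch of that pad-then-resize idea (not the actual transform implementation):

    import numpy as np

    def square_pad_and_resize(img, output_size):
        """Zero-pad the shorter 2D axis to make the slice square,
        then resize it to (output_size, output_size)."""
        h, w = img.shape
        side = max(h, w)
        padded = np.pad(img, ((0, side - h), (0, side - w)), mode='constant')
        # Note: np.resize repeats/truncates data rather than interpolating.
        return np.resize(padded, (output_size, output_size))

    # 3 x 5 -> 5 x 5 -> 8 x 8
    out = square_pad_and_resize(np.zeros((3, 5)), 8)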

    Range Mapping MRI 2D Transform

    This transform rescales the 2D MRI values to a new max_value. For example:

    if we have slices whose values go from 0 to 16384, this transform can easily rescale them to the 0 to 1 range. This is useful when using the scikit-image implementation of the Clahe transform, which implicitly asks for images in the -1 to 1 range.
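
    A sketch of that rescaling (assuming a non-negative slice with a positive maximum; not the actual transform code):

    import numpy as np

    def map_range(slice_2d, new_max=1.0):
        """Scale values so the slice maximum becomes new_max."""
        slice_2d = slice_2d.astype(np.float32)
        return slice_2d * (new_max / slice_2d.max())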

    opened by asciidiego 2
  • Bugfix: Clahe and HistogramClipping refactor.

    1. Added the labeled keyword to the new transforms.
    2. Refactored the main functionality of each transform into a class method.
    3. Since the input can be a NumPy array or a PIL image, calling np.asarray makes the transformation robust to PIL inputs (see the snippet below).
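
    A tiny illustration of that point (hypothetical values):

    import numpy as np
    from PIL import Image

    img = Image.fromarray(np.zeros((4, 4), dtype=np.uint8))
    arr = np.asarray(img)   # PIL input -> ndarray
    arr2 = np.asarray(arr)  # ndarray input passes through unchanged
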
    opened by asciidiego 2
  • Adding 3D-specific dataloaders, transforms, and a model

    This version contains the functions necessary to build a simple pipeline for training and using a 3D U-Net model.

    This version contains:

    • Two data loaders (MRI3DSegmentationDataset, MRI3DSubVolumeSegmentationDataset): the first one simply returns the images/GT as tensors (the whole volume); the second one splits the volume into several sub-volumes, which is necessary to run the 3D U-Net without needing dozens of GB of VRAM (see the sketch after this list).
    • New transforms (NormalizeInstance3D, RandomRotation3D, RandomReverse3D).
    • A 3D U-Net model
    • Also updated the usage of some deprecated functions in the U-Net model.
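
    A rough sketch of the sub-volume idea (hypothetical helper, not the library's implementation):

    import numpy as np

    def split_into_subvolumes(volume, size=(64, 64, 64)):
        """Cut a (D, H, W) volume into non-overlapping patches so that a
        3D U-Net can process them within GPU memory limits."""
        d, h, w = size
        subs = []
        for z in range(0, volume.shape[0] - d + 1, d):
            for y in range(0, volume.shape[1] - h + 1, h):
                for x in range(0, volume.shape[2] - w + 1, w):
                    subs.append(volume[z:z + d, y:y + h, x:x + w])
        return subs
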
    opened by morvan-s 2
  • Medical image type

    Hello,

    Thanks for your work! I have a question about the format of the medical images: they are generally collected in DICOM format (.dcm). Do you provide any dataloaders for DICOM inputs? As far as I know, there is a Python library for this image format that can convert .dcm files to NumPy arrays; do you use it?
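
    For reference, a minimal sketch of such a conversion, assuming pydicom is the library in question:

    import numpy as np
    import pydicom

    ds = pydicom.dcmread("slice.dcm")         # read a DICOM file
    arr = ds.pixel_array.astype(np.float32)   # pixel data as a NumPy array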

    question 
    opened by Yifeifr 2
  • digital-copyright

    Hi perone! 👋 I added this optional feature to digitally sign your source code and track it on a blockchain node, should you ever be audited or experience a software supply-chain attack. Simply compare the byte-encrypted signature on your .git binary with the hash written to your immutable blockchain node. If they ever differ, you should escalate. See the perone-digital-copyright for complete instructions on accessing your hash. Feel free to contact me directly to review any questions before accepting. ~~Best: [email protected]

    opened by JudeSafo 1
  • Implement undo_transform for RandomRotation and RandomRotation3D

    Implement undo_transform for RandomRotation and RandomRotation3D.

    1. Save the angle in the metadata.
    2. undo_transform performs a rotation with the opposite angle (sketched below).
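
    A sketch of that approach (assuming a dict-style sample with 'input' and 'metadata' keys; not the actual medicaltorch code):

    import numpy as np
    import torchvision.transforms.functional as F

    class RandomRotationSketch(object):
        def __init__(self, degrees=10):
            self.degrees = degrees

        def __call__(self, sample):
            angle = np.random.uniform(-self.degrees, self.degrees)
            sample['input'] = F.rotate(sample['input'], angle)
            sample['metadata']['rotation_angle'] = angle  # 1. save the angle in the metadata
            return sample

        def undo_transform(self, sample):
            angle = sample['metadata']['rotation_angle']
            sample['input'] = F.rotate(sample['input'], -angle)  # 2. rotate back with the opposite angle
            return sample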
    opened by charleygros 1
  • 3D Transformations?

    Are 3D transformations supported? It is not clear to me from the documentation and examples, and from looking at the code I would guess that is not the case. If they are supported, could you update the docs? If not, is anyone working on it? (Maybe I will add some basic transformations.)

    opened by aydindemircioglu 1
  • import re for line 480

    flake8 testing of https://github.com/perone/medicaltorch on Python 3.7.0

    $ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

    ./examples/gmchallenge_unet.py:107:42: E999 SyntaxError: invalid syntax
                var_gt = gt_samples.cuda(async=True)
                                             ^
    ./medicaltorch/datasets.py:479:16: F821 undefined name 're'
                if re.search('[SaUO]', elem.dtype.str) is not None:
                   ^
    ./medicaltorch/transforms.py:26:36: F821 undefined name 'img'
                img = t.undo_transform(img)
                                       ^
    1     E999 SyntaxError: invalid syntax
    2     F821 undefined name 're'
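
    The F821 for datasets.py can presumably be resolved by adding the missing import at the top of medicaltorch/datasets.py:

    import re  # needed by the re.search call flagged above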
    
    opened by cclauss 1
  • ‘async’ is a reserved word in Python >= 3.7

    flake8 testing of https://github.com/perone/medicaltorch on Python 3.7.0

    $ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

    ./examples/gmchallenge_unet.py:107:42: E999 SyntaxError: invalid syntax
                var_gt = gt_samples.cuda(async=True)
                                             ^
    ./medicaltorch/datasets.py:479:16: F821 undefined name 're'
                if re.search('[SaUO]', elem.dtype.str) is not None:
                   ^
    ./medicaltorch/transforms.py:26:36: F821 undefined name 'img'
                img = t.undo_transform(img)
                                       ^
    1     E999 SyntaxError: invalid syntax
    2     F821 undefined name 're'
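
    For reference, PyTorch 0.4+ replaced the async keyword argument with non_blocking, so the E999 line would become:

    # var_gt = gt_samples.cuda(async=True)   # invalid on Python >= 3.7
    var_gt = gt_samples.cuda(non_blocking=True)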
    
    enhancement 
    opened by cclauss 1
  • Import Errors in Datasets Class

    Hi,

    When using the latest version of medicaltorch (or at least the one installed by pip), importing the datasets module raises the following error:

    from torch._six import string_classes, int_classes                                   
    ImportError: cannot import name 'int_classes' from 'torch._six'
    

    I've found that this can be fixed by removing int_classes from the following line in datasets.py:

    from torch._six import string_classes, int_classes
    

    and, instead, declaring int_classes = int.
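
    In other words, the workaround in medicaltorch/datasets.py looks like this:

    from torch._six import string_classes  # int_classes was removed from torch._six
    int_classes = int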

    opened by Birb12 0
  • Project dependencies may have API risk issues

    Hi, in medicaltorch, inappropriate dependency version constraints can cause risks.

    Below are the dependencies and version constraints that the project is using:

    nibabel>=2.2.1
    scipy>=1.0.0
    numpy>=1.14.1
    torch>=0.4.0
    torchvision>=0.2.1
    tqdm>=4.23.0
    scikit-image==0.15.0
    

    The == version constraint introduces a risk of dependency conflicts because the dependency scope is too strict. Constraints with no upper bound (or *) introduce a risk of missing-API errors, because the latest version of a dependency may remove some APIs.

    After further analysis of this project, the version constraint for scipy can be changed to >=0.19.0,<=1.7.3, and the version constraint for tqdm can be changed to >=4.36.0,<=4.64.0.

    The above modification suggestions reduce dependency conflicts as much as possible while still allowing recent versions, without causing call errors in the project.
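
    Applied to the list above, the suggested constraints would give a requirements file along these lines (the other pins left as the project has them):

    nibabel>=2.2.1
    scipy>=0.19.0,<=1.7.3
    numpy>=1.14.1
    torch>=0.4.0
    torchvision>=0.2.1
    tqdm>=4.36.0,<=4.64.0
    scikit-image==0.15.0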

    The invocations in the current project include all of the following methods.

    The methods called from scipy:
    scipy.spatial.distance.directed_hausdorff
    scipy.ndimage.filters.gaussian_filter
    scipy.ndimage.interpolation.map_coordinates
    scipy.spatial.distance.dice
    scipy.spatial.distance.jaccard
    
    The methods called from tqdm:
    tqdm.tqdm.set_postfix
    tqdm.tqdm
    
    The full list of methods called in the project:
    self.up3
    self.mp3
    self.conv1a
    f.read
    re.search
    self.branch4a_bn
    DownConv
    isinstance
    numpy.arange
    ValueError
    scipy.spatial.distance.directed_hausdorff
    self.conv3
    self.dc5
    torch.LongTensor
    numpy.any
    numpy.copy
    range
    numpy.allclose
    torch.from_numpy
    self.branch4a_drop
    self.ec2
    self.mp1
    index.self.handlers.get_pair_data
    torch.nn.BatchNorm2d
    numpy.sqrt
    self.branch5b_bn
    self.metadata.keys
    training_mean.input_data.pow.sum
    torch.stack
    torch.nn.LeakyReLU
    self.input_handle.header.get_zooms
    self.conv2_bn
    torchvision.transforms.functional.pad
    numpy.float32
    input.view
    self.conv1b_bn
    numpy.zeros
    input_data.np.flip.copy
    torchvision.transforms.functional.rotate
    self.sample_transform
    type
    self.slice_filter_fn
    numpy.random.uniform
    len
    tflat.iflat.sum
    medicaltorch.transforms.ToTensor
    self.conv9
    self.up_conv
    self.branch1a
    SegmentationPair2D.get_pair_slice
    prediction.flatten
    self.dc4
    self.branch2a
    self.branch4b_bn
    noise.astype.astype
    self.result_dict.items
    target.index_select
    self.threshold.target.torch.gt.float.view
    f.read.splitlines
    mt_collate
    self.branch3b_bn
    self.branch1a_bn
    numpy.random.random
    self.branch1b_drop
    self.branch3a
    self.branch3b_drop
    self.input_handle.header.get_data_shape
    self._build_train_input_filename
    self.gt_handle.header.get_data_shape
    self.conv2a_bn
    PIL.Image.fromarray.resize
    torch.nn.functional.avg_pool2d
    self.ec0
    sample_data.numpy
    self.branch3b
    self.amort
    self.conv2b_drop
    self.branch1a_drop
    error_msg.format
    os.path.dirname
    self.up1
    torchvision.transforms.functional.center_crop
    self.input_handle.get_fdata
    target.index_select.view
    numpy.squeeze
    self.branch4b_drop
    int
    self.ec3
    Mock
    nibabel.as_closest_canonical
    self.branch3a_bn
    os.path.exists
    self.branch1b
    SegmentationPair2D
    UpConv
    numpy.divide
    target.view
    self.input_handle.get_fdata.numel
    torch.nn.Conv2d
    PIL.Image.fromarray.mean
    self.propagate_params
    self.Unet.super.__init__
    self.batch.items
    self.branch2a_bn
    collections.defaultdict
    self.input_handle.get_fdata.sum
    self.down_conv
    torch.gt
    sys.path.insert
    numeric_score
    input.size
    masking.squeeze.sum
    self.branch2b_drop
    i.self.handlers.get_pair_data
    self.up2
    self.branch4a
    coord.self.handlers.get_pair_data
    tqdm.tqdm
    NotImplementedError
    self.indexes.append
    self.mp2
    self.dc3
    torch.nn.functional.relu
    indices.image.map_coordinates.reshape
    self.conv4
    self._prepare_indexes
    self.get_pair_data
    DatasetManager
    self.branch2b
    self.branch5b
    torchvision.transforms.functional.to_tensor
    self.conv2b_bn
    self.dc1
    SampleMetadata
    self.gt_handle.header.get_zooms
    labeled_target.view.sum
    self.dc8
    skimage.exposure.equalize_adapthist
    torch.is_tensor
    self.UNet3D.super.__init__
    torch.cat
    format
    numpy.random.randint
    self.transform
    PIL.Image.fromarray.std
    self.ec7
    self.branch3a_drop
    setuptools.setup
    self.downconv.size
    setuptools.find_packages
    elem.dtype.name.startswith
    scipy.ndimage.filters.gaussian_filter
    torch.nn.Dropout2d
    masking.sum.sum
    self.conv1b_drop
    self.conv2b
    scipy.spatial.distance.dice
    numpy.isnan
    elem.dtype.name.__numpy_type_map
    self.conv2a_drop
    self.conv1a_bn
    torch.DoubleTensor
    numpy.reshape
    torch.nn.ConvTranspose3d
    codecs.open
    self.branch5a
    torch.nn.Conv3d
    torch.nn.MaxPool3d
    RuntimeError
    masking.squeeze.nonzero
    list
    self.prediction
    self.conv2_drop
    os.path.join
    groundtruth.flatten
    numpy.meshgrid
    self.amort_bn
    numpy.random.rand
    torchvision.transforms.functional.affine
    numpy.round
    input.index_select
    self.dc2
    self.sample_augment.append
    self.dc0
    scipy.ndimage.interpolation.map_coordinates
    masking.nonzero.squeeze
    self.conv2a
    self.ec5
    map
    TypeError
    tqdm.tqdm.set_postfix
    self.sample_augment
    self.branch1b_bn
    self.transform.undo_transform
    self._load_filenames
    torch.nn.Sequential
    self.label_augment
    self.get_params
    input.index_select.view
    scipy.spatial.distance.jaccard
    self.conv1a_drop
    self.DownConv.super.__init__
    round
    self.handlers.append
    self.UpConv.super.__init__
    self.dc9
    SegmentationPair2D.get_pair_shapes
    numpy.transpose
    self.downconv
    os.path.abspath
    numpy.percentile
    self.gt_handle.get_fdata
    numpy.array
    self.conv2
    self.pool0
    numpy.flip
    self.conv1_drop
    self.ec1
    self.filename_pairs.append
    torchvision.transforms.functional.normalize
    self.branch5a_bn
    self.branch5b_drop
    self.ec4
    self.elastic_transform
    numpy.sum
    self.branch2b_bn
    super.__init__
    self.concat_bn
    torch.sigmoid
    diff_conf.mean
    self.ec6
    global_pool.expand.expand
    t.undo_transform
    self.threshold.target.torch.gt.float
    self.branch2a_drop
    numpy.random.normal
    self.branch4b
    labeled_input.view.sum
    self.conv1
    self.get_pair_shapes
    self.dc6
    PIL.Image.fromarray
    self.branch5a_drop
    self.amort_drop
    nibabel.load
    numpy.sqrt.item
    self.conv1_bn
    torch.nn.MaxPool2d
    sample.update
    self.dc7
    self.pool2
    self.concat_drop
    training_mean.input_data.pow
    metric_fn
    self.conv1b
    self.pool1
    training_mean.item
    zip
    unittest.mock.MagicMock
    super
    numpy.asarray
    masking.squeeze.squeeze
    gt_data.np.flip.copy
    

    @developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.

    opened by PyDeps 0
  • dice score greater than 100

    I have been trying to run the example code on the SCGMChallenge dataset. I see that the Dice score is computed using scipy. Since preds and gt_npy are not boolean arrays, the outcome of the Dice dissimilarity is sometimes negative (d = -0.1138425519461516), and the Dice score (1 - d) then exceeds one (d1 = 1.1138425519461517).

    The result is that the Dice score comes out greater than 100%.
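
    A minimal sketch of the behaviour (hypothetical arrays, not the actual example data):

    import numpy as np
    from scipy.spatial.distance import dice

    preds = np.array([0, 2, 2, 0])    # non-boolean predictions
    gt_npy = np.array([0, 1, 1, 0])

    print(1.0 - dice(preds, gt_npy))  # > 1, because the dissimilarity is negative

    # Casting to boolean masks keeps the score in [0, 1]:
    print(1.0 - dice(preds.astype(bool), gt_npy.astype(bool)))  # 1.0 (perfect overlap)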

    opened by kumartr 0
  • Issues and any examples for using 3D MRI Datasets and Transformation?

    Hello all.

    May I know how to use the captioned functions that were recently added?

    I could not find any examples or guides to follow. Any help is very much appreciated!

    Here is my code:

    filenames = namedtuple('filenames', 'input_filename gt_filename')
    filenametuple = filenames(mri_input_filename, mri_gt_filename)

    pair = mt_datasets.MRI3DSegmentationDataset(filenametuple)

    and it produces the following error:

        338
        339     def _load_filenames(self):
    --> 340         for input_filename, gt_filename in self.filename_pairs:
        341             segpair = SegmentationPair2D(input_filename, gt_filename,
        342                                          self.cache, self.canonical)

    ValueError: too many values to unpack (expected 2)
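
    Judging from the loop in that traceback, the dataset seems to expect a list of (input, gt) filename pairs rather than a single namedtuple, so a likely fix (untested sketch) is:

    filename_pairs = [(mri_input_filename, mri_gt_filename)]
    dataset = mt_datasets.MRI3DSegmentationDataset(filename_pairs)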

    opened by arvinhui 0
Releases (v0.2)
Owner
Christian S. Perone
Machine Learning Engineering / Research