Segmentation and Identification of Vertebrae in CT Scans using CNN, k-means Clustering and k-NN

Overview

This repository contains the code accompanying the paper Segmentation and Identification of Vertebrae in CT Scans Using CNN, k-Means Clustering and k-NN (Informatics, 2021).

If you use this code for your research, please cite our paper:

@Article{informatics8020040,
AUTHOR = {Altini, Nicola and De Giosa, Giuseppe and Fragasso, Nicola and Coscia, Claudia and Sibilano, Elena and Prencipe, Berardino and Hussain, Sardar Mehboob and Brunetti, Antonio and Buongiorno, Domenico and Guerriero, Andrea and Tatò, Ilaria Sabina and Brunetti, Gioacchino and Triggiani, Vito and Bevilacqua, Vitoantonio},
TITLE = {Segmentation and Identification of Vertebrae in CT Scans Using CNN, k-Means Clustering and k-NN},
JOURNAL = {Informatics},
VOLUME = {8},
YEAR = {2021},
NUMBER = {2},
ARTICLE-NUMBER = {40},
URL = {https://www.mdpi.com/2227-9709/8/2/40},
ISSN = {2227-9709},
DOI = {10.3390/informatics8020040}
}

Graphical Abstract: [figure]


Materials

The dataset (from the VerSe challenge) can be downloaded for free at this URL.


Configuration and pre-processing

Configure the file config/paths.py according to the paths on your computer. Note that base_dataset_dir should be an absolute path pointing to the directory that contains the subfolders with images and labels for training and validating the algorithms in this repository.
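As a reference, a minimal sketch of config/paths.py is shown below; the actual file may define additional path variables, and the path itself is only an example:

# config/paths.py -- minimal sketch; adapt the path to your machine
base_dataset_dir = "/home/user/datasets/VerSe"  # absolute path to the dataset root (must contain the image/label subfolders)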

In order to perform pre-processing, execute the following scripts in the given order (a complete example run is sketched after the list).

  1. Perform Train / Test split:
python run/task0/split.py --original-training-images=OTI --original-training-labels=OTL \ 
                          --original-validation-images=OVI --original-validation-labels=OVL

Where:

  • OTI is the path to the CT scans of the original dataset (downloaded from the VerSe challenge, see link above);
  • OTL is the path to the labels of the original dataset;
  • OVI is the path where the test images will be placed;
  • OVL is the path where the test labels will be placed.
  2. Crop the split datasets:
python run/task0/crop_mask.py --original-training-images=OTI --original-training-labels=OTL \ 
                              --original-validation-images=OVI --original-validation-labels=OVL

Where the arguments are the same as in step 1.

  3. Pre-process the cropped datasets (see also the pre-processing by Payer et al.):
python run/task0/pre_processing.py
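Putting the three steps together, a complete pre-processing run might look like the following; the paths are placeholders and must be adapted to your dataset layout:

python run/task0/split.py --original-training-images=/data/verse/train/images \
                          --original-training-labels=/data/verse/train/labels \
                          --original-validation-images=/data/verse/test/images \
                          --original-validation-labels=/data/verse/test/labels
python run/task0/crop_mask.py --original-training-images=/data/verse/train/images \
                              --original-training-labels=/data/verse/train/labels \
                              --original-validation-images=/data/verse/test/images \
                              --original-validation-labels=/data/verse/test/labels
python run/task0/pre_processing.py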

Binary Segmentation

This stage relies on a 3D V-Net. The workflow for binary segmentation is depicted in the following figure:

[Figure: binary segmentation workflow]

Training

To perform the training, the syntax is as follows:

python run/task1/train.py --epochs=NUM_EPOCHS --batch=BATCH_SIZE --workers=NUM_WORKERS \
                          --lr=LR --val_epochs=VAL_EPOCHS

Where:

  • NUM_EPOCHS is the number of epochs for which to train the CNN (we often used 500 or 1000 in our experiments);
  • BATCH_SIZE is the batch size (we often used 8 in our experiments, in order to benefit from the BatchNormalization layers);
  • NUM_WORKERS is the number of workers for data loading (see the PyTorch documentation);
  • LR is the learning rate;
  • VAL_EPOCHS is the number of epochs between validation runs during training (a checkpoint model is also saved every VAL_EPOCHS epochs).
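For instance, a training run consistent with the settings mentioned above might be launched as follows; the values chosen for NUM_WORKERS, LR and VAL_EPOCHS are only illustrative:

python run/task1/train.py --epochs=500 --batch=8 --workers=4 \
                          --lr=1e-4 --val_epochs=50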

Inference

To perform the inference, the syntax is as follows:

python run/task1/segm_bin.py --path_image_in=PATH_IMAGE_IN --path_mask_out=PATH_MASK_OUT

Where:

  • PATH_IMAGE_IN is the folder with input images;
  • PATH_MASK_OUT is the folder where the output masks will be written.
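As an example, the following call would run inference on a folder of test images and write the binary masks into the predTs output folder described below; the exact paths are illustrative:

python run/task1/segm_bin.py --path_image_in=/data/verse/test/images \
                             --path_mask_out=/data/verse/predTs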

An example inference result is depicted in the following figure:

[Figure: example binary segmentation inference result]

Metrics Calculation

In order to calculate binary segmentation metrics, the syntax is as follows:

python run/task1/metrics.py

Multiclass Segmentation

The workflow for multiclass segmentation is depicted in the following figure:

[Figure: multiclass segmentation workflow]

To perform multiclass segmentation (which can be run only on the binary segmentation output), the syntax is as follows:

python run/task2/multiclass_segmentation.py --input-path=INPUT_PATH \
                                            --gt-path=GT_PATH \
                                            --output-path=OUTPUT_PATH \
                                            --use-inertia-tensor=INERTIA \
                                            --no-metrics=NOM

Where:

  • INPUT_PATH is the path to the folder containing the binary spine masks obtained in the previous steps (or the binary spine ground truth).
  • GT_PATH is the path to the folder containing the ground truth labels.
  • OUTPUT_PATH is the path where the output multiclass masks will be written.
  • INERTIA can be either 0 or 1, depending on whether you want to include the inertia tensor in the feature set used to discriminate between vertebral bodies and arches (useful for scoliosis cases); the default is 0.
  • NOM can be either 0 or 1, depending on whether you want to skip the calculation of the multi-Hausdorff distance and multi-ASSD for the vertebrae labelling (which can be very computationally expensive with this implementation); the default is 1.
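For example, labelling the binary predictions of the test set with the default options might look like this; apart from predMulticlass (one of the output folders described below), the folder names are illustrative:

python run/task2/multiclass_segmentation.py --input-path=/data/verse/predTs \
                                            --gt-path=/data/verse/test/labels \
                                            --output-path=/data/verse/predMulticlass \
                                            --use-inertia-tensor=0 \
                                            --no-metrics=1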

Figures highlighting the different steps involved in this stage follow:

  • Morphology

  • Connected components

  • Clustering and arch/body coupling

  • Centroids computation

  • Mesh reconstruction


Visualization of the Predictions

The base_dataset_dir folder also contains the output folders:

  • predTr contains the binary segmentation predictions on the training set;
  • predTs contains the binary segmentation predictions on the test set;
  • predMulticlass contains the multiclass segmentation predictions and the JSON files with the centroids' positions.
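As a quick way to inspect these outputs, the sketch below loads one multiclass mask and its centroid JSON file; the file names are illustrative and the exact JSON schema is not documented here, so treat it only as a starting point:

# inspect_prediction.py -- minimal sketch; assumes the masks are NIfTI files readable with nibabel
import json
import numpy as np
import nibabel as nib

mask = nib.load("predMulticlass/verse004_multiclass.nii.gz")   # illustrative file name
labels, counts = np.unique(mask.get_fdata().astype(int), return_counts=True)
print("voxels per vertebra label:", dict(zip(labels.tolist(), counts.tolist())))

with open("predMulticlass/verse004_centroids.json") as f:      # illustrative file name
    centroids = json.load(f)
print("centroids:", centroids)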
