Overview

1st place solution in CCF BDCI 2021 ULSEG challenge

This is the source code of the 1st place solution for ultrasound image angioma segmentation task (with Dice 90.32%) in 2021 CCF BDCI challenge.

[Challenge leaderboard 🏆 ]

1 Pipeline of our solution

Our solution includes data pre-processing, network training, ensemble inference and data post-processing.

[Figure] Ultrasound image hemangioma segmentation framework

1.1 Data pre-processing

To evaluate our method reliably and improve our performance on the leaderboard, we use 5-fold cross-validation. In our opinion, it is necessary to keep the tumor size distribution consistent between the training and validation sets. We therefore calculate the tumor area of each image and categorize the tumors into three size grades: 1) less than 3200 pixels, 2) between 3200 and 7200 pixels, and 3) greater than 7200 pixels. The two thresholds, 3200 and 7200 pixels, are close to the tertiles of the area distribution. We split the images of each size grade into 5 folds and then merge the corresponding folds across grades into the final folds. This strategy ensures that all 5 folds have similar size distributions.

[Figure] Tumors of different sizes
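The split itself can be reproduced with a stratified K-fold. Below is a minimal sketch using scikit-learn, not the repository's preprocess.py; the CSV column names and the seed are illustrative assumptions.

import glob
import os

import numpy as np
import pandas as pd
from PIL import Image
from sklearn.model_selection import StratifiedKFold

mask_paths = sorted(glob.glob("./train_data/label/*.png"))

def tumor_size_grade(path):
    # Grade by tumor area: 0 (< 3200 px), 1 (3200-7200 px), 2 (> 7200 px).
    area = int((np.array(Image.open(path).convert("L")) > 0).sum())
    return 0 if area < 3200 else (1 if area < 7200 else 2)

grades = [tumor_size_grade(p) for p in mask_paths]

# Stratifying on the size grade keeps the tumor size distribution similar
# across the five folds; the fixed seed makes the split reproducible.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
rows = []
for fold, (_, val_idx) in enumerate(skf.split(mask_paths, grades), start=1):
    rows += [{"image": os.path.basename(mask_paths[i]), "fold": fold}
             for i in val_idx]
pd.DataFrame(rows).to_csv("./train_data/train.csv", index=False)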

1.2 Network training

Due to the small size of the training set, we chose a lightweight network structure for this competition: Linknet with an efficientnet-b6 encoder. The following methods are used for data augmentation (DA): 1) horizontal flipping, 2) vertical flipping, 3) random cropping, 4) random affine transformation, 5) random scaling, 6) random translation, 7) random rotation, and 8) random shearing. In addition, one of the following methods is randomly selected for enhanced data augmentation (EDA): 1) sharpening, 2) local distortion, 3) contrast adjustment, 4) blurring (Gaussian, mean, median), 5) addition of Gaussian noise, and 6) erasing.
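The DA/EDA policy above can be expressed, for example, with albumentations. Whether the repository actually uses albumentations, and the exact magnitudes, probabilities, and crop size below, are assumptions:

import albumentations as A

train_aug = A.Compose([
    # DA: flips plus cropping and affine (scale / translate / rotate / shear).
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomCrop(height=224, width=224, p=0.3),  # hypothetical crop size
    A.Affine(scale=(0.9, 1.1), translate_percent=0.0625,
             rotate=(-15, 15), shear=(-10, 10), p=0.5),
    # EDA: randomly select exactly one of the heavier transforms.
    A.OneOf([
        A.Sharpen(),
        A.GridDistortion(),            # local distortion
        A.RandomBrightnessContrast(),  # contrast adjustment
        A.GaussianBlur(),
        A.Blur(),                      # mean blur
        A.MedianBlur(blur_limit=5),
        A.GaussNoise(),
        A.CoarseDropout(),             # erasing
    ], p=0.5),
    A.Resize(256, 256),                # matches --image_resolution=256
])

# Usage: augmented = train_aug(image=image, mask=mask)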

1.3 Ensemble inference

We ensemble five models (one per fold) and apply test-time augmentation (TTA) to each model. TTA generally improves the generalization ability of a segmentation model. In our framework, the TTA includes vertical flipping, horizontal flipping, and 180-degree rotation.
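Since the Acknowledgements credit the ttach library for TTA, the five-fold ensemble with TTA can be sketched as follows; the merge mode and the exact composition of transforms are assumptions:

import torch
import ttach as tta

tta_transforms = tta.Compose([
    tta.HorizontalFlip(),
    tta.VerticalFlip(),
    tta.Rotate90(angles=[0, 180]),  # identity and 180-degree rotation
])

@torch.no_grad()
def ensemble_predict(models, image):
    # Average the TTA-merged probability maps of the five fold models.
    probs = []
    for model in models:
        wrapped = tta.SegmentationTTAWrapper(model.eval(), tta_transforms,
                                             merge_mode="mean")
        probs.append(torch.sigmoid(wrapped(image)))
    return torch.stack(probs).mean(dim=0)  # (B, 1, H, W) probability map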

1.4 Data post-processing

We post-process the obtained binary mask by removing small isolated points (RSIP) and applying edge median filtering (EMF). The edge of our predicted tumor is not smooth enough, which does not quite match the physician's manual annotation, so we adopt a small trick: we apply a median filter specifically to the edge region. Experimental results show that this improves the accuracy of tumor segmentation.
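Both steps can be sketched with OpenCV as below. This is an illustration, not the repository's postprocess.py; the default values mirror the --threshood and --kernel flags used in step4.

import cv2
import numpy as np

def postprocess(mask, area_threshold=50, kernel_size=20):
    binary = (mask > 127).astype(np.uint8)

    # RSIP: keep only connected components at or above the area threshold.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    cleaned = np.zeros_like(binary)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= area_threshold:
            cleaned[labels == i] = 1

    # EMF: median-filter only a band around the predicted tumor boundary,
    # leaving the interior of the mask untouched.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    edge_band = cv2.dilate(cleaned, kernel) - cv2.erode(cleaned, kernel)
    out = (cleaned * 255).astype(np.uint8)
    smoothed = cv2.medianBlur(out, kernel_size | 1)  # aperture must be odd
    out[edge_band > 0] = smoothed[edge_band > 0]
    return out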

2 Segmentation results on 2021 CCF BDCI dataset

We test our method on the 2021 CCF BDCI dataset (215 images for training and 107 for testing). The segmentation results of 5-fold CV based on "Linknet with efficientnet-B6 encoder" are as follows:

| fold | Linknet | Unet | Att-Unet | DeeplabV3+ | Efficient-b5 | Efficient-b6 | Resnet-34 | DA | EDA | TTA | RSIP | EMF | Dice (%) |
|------|---------|------|----------|------------|--------------|--------------|-----------|----|-----|-----|------|-----|----------|
| 1 | | | | | | | | | | | | | 85.06 |
| 1 | | | | | | | | | | | | | 84.48 |
| 1 | | | | | | | | | | | | | 84.72 |
| 1 | | | | | | | | | | | | | 84.93 |
| 1 | | | | | | | | | | | | | 86.52 |
| 1 | | | | | | | | | | | | | 86.18 |
| 1 | | | | | | | | | | | | | 86.91 |
| 1 | | | | | | | | | | | | | 87.38 |
| 1 | | | | | | | | | | | | | 88.36 |
| 1 | | | | | | | | | | | | | 89.05 |
| 1 | | | | | | | | | | | | | 89.20 |
| 1 | | | | | | | | | | | | | 89.52 |
| E | | | | | | | | | | | | | 90.32 |

E: ensemble of the five folds.

3 How to run this code?

Here, we split the whole process into 5 steps so that you can easily replicate our results or perform the whole pipeline on your private custom dataset.

  • step0, preparation of the environment
  • step1, run the script preprocess.py to perform the preprocessing
  • step2, run the script train.py to train our model
  • step3, run the script inference.py to run inference on the test data
  • step4, run the script postprocess.py to perform the post-processing

You should prepare your data in the format of the 2021 CCF BDCI dataset. This is very simple: you only need two folders storing the PNG-format images and masks, respectively. You can download the dataset from [Homepage].

The complete file structure is as follows:

  |--- CCF-BDCI-2021-ULSEG-Rank1st
      |--- segmentation_models_pytorch_4TorchLessThan120
          |--- ...
          |--- ...
      |--- saved_model
          |--- pred
          |--- weights
      |--- best_model
          |--- best_model1.pth
          |--- ...
          |--- best_model5.pth
      |--- train_data
          |--- img
          |--- label
          |--- train.csv
      |--- test_data
          |--- img
          |--- predict
      |--- dataset.py
      |--- inference.py
      |--- losses.py
      |--- metrics.py
      |--- ploting.py
      |--- preprocess.py
      |--- postprocess.py
      |--- util.py
      |--- train.py
      |--- visualization.py
      |--- requirement.txt

3.1 Step0 preparation of environment

We have tested our code in the environment listed in requirement.txt. To install these dependencies, run the following code:

pip install -r requirement.txt

3.2 Step1 preprocessing

In step1, run the script below; train.csv will be generated under the train_data folder:

python preprocess.py \
--image_path="./train_data/label" \
--csv_path="./train_data/train.csv"

3.3 Step2 training

With the csv file train.csv, you can directly perform K-fold cross-validation (the default is 5-fold). The script uses a fixed random seed to ensure that the K-fold CV of each experiment is reproducible. Run the following code:

python train.py \
--input_channel=1 \
--output_class=1 \
--image_resolution=256 \
--epochs=100 \
--num_workers=2 \
--device=0 \
--batch_size=8 \
--backbone="efficientnet-b6" \
--network="Linknet" \
--initial_learning_rate=1e-7 \
--t_max=110 \
--folds=5 \
--k_th_fold=1 \
--fold_file_list="./train_data/train.csv" \
--train_dataset_path="./train_data/img" \
--train_gt_dataset_path="./train_data/label" \
--saved_model_path="./saved_model" \
--visualize_of_data_aug_path="./saved_model/pred" \
--weights_path="./saved_model/weights" \
--weights="./saved_model/weights/best_model.pth" 

By setting the parameter k_th_fold to each value from 1 to folds and running the script repeatedly, you can complete the training of all K folds. After each fold finishes training, copy the .pth file from the weights path to the best_model folder; a helper that automates this loop is sketched below.
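This is a hypothetical convenience loop, not part of the repository; pass the remaining train.py arguments from the command above via common_args:

import shutil
import subprocess

common_args = ["--folds=5", "--backbone=efficientnet-b6", "--network=Linknet"]
for k in range(1, 6):
    # Train fold k, then collect its best weights for the later ensemble.
    subprocess.run(["python", "train.py", f"--k_th_fold={k}", *common_args],
                   check=True)
    shutil.copy("./saved_model/weights/best_model.pth",
                f"./best_model/best_model{k}.pth")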

3.4 Step3 inference (test)

Before running the script, make sure that you have generated five models and saved them in the best_model folder. Run the following code:

python inference.py \
--input_channel=1 \
--output_class=1 \
--image_resolution=256 \
--device=0 \
--backbone="efficientnet-b6" \
--network="Linknet" \
--weights1="./saved_model/weights/best_model1.pth" \
--weights2="./saved_model/weights/best_model2.pth" \
--weights3="./saved_model/weights/best_model3.pth" \
--weights4="./saved_model/weights/best_model4.pth" \
--weights5="./saved_model/weights/best_model5.pth" \
--test_path="./test_data/img" \
--saved_path="./test_data/predict" 

The results of the model inference will be saved in the predict folder.

3.5 Step4 postprocessing

Run the following code:

python postprocess.py \
--image_path="./test_data/predict" \
--threshood=50 \
--kernel=20 

Alternatively, if you want to observe the overlap between the predicted result and the original image, we also provide a visualization script visualization.py. Modify the image path in the code and run the script directly.
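For reference, a minimal overlay can also be produced with OpenCV; the file name below is a placeholder, and visualization.py's actual interface may differ:

import cv2

image = cv2.imread("./test_data/img/example.png")
pred = cv2.imread("./test_data/predict/example.png", cv2.IMREAD_GRAYSCALE)

# Draw the predicted tumor margin (red) on top of the original image.
contours, _ = cv2.findContours((pred > 127).astype("uint8"),
                               cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image, contours, -1, (0, 0, 255), 2)
cv2.imwrite("overlay.png", image)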

[Figure] Visualization of tumor margins

4 Acknowledgement

  • Thanks to the organizers of the 2021 CCF BDCI challenge.
  • Thanks to the 2020 MICCAI TN-SCUI TOP 1 solution for making its code public.
  • Thanks to qubvel, the author of smp (segmentation_models.pytorch) and ttach; all networks and TTA methods used in this code come from his implementations.