Official release of MSHT: Multi-stage Hybrid Transformer for the ROSE Image Analysis of Pancreatic Cancer. arXiv: http://arxiv.org/abs/2112.13513

Overview

MSHT: Multi-stage Hybrid Transformer for the ROSE Image Analysis

This is the official page of MSHT, with its experimental scripts and records. We are dedicated to the open-source concept and hope that scholars can benefit from our release.

The trained models and the dataset are not publicly available due to the requirements of Peking Union Medical College Hospital (PUMCH).

Background

Rapid on-site evaluation (ROSE) is a clinical innovation used to diagnose pancreatic cancer. In the ROSE workflow, cell samples are obtained by EUS-FNA (endoscopic ultrasound-guided fine-needle aspiration) and stained with the Diff-Quik technique, while an on-site pathologist evaluates the stained smears immediately. However, the requirement for on-site pathologists limits the expansion of this revolutionary method. Many more lives could be saved if an AI system could assist the on-site pathologists with their work. By enabling the ROSE process without on-site pathologists, ROSE could be adopted widely, since many hospitals are currently constrained by the shortage of on-site pathologists.

In histology and cytopathology, convolutional neural networks (CNNs) have performed robustly and generalize well thanks to their inductive bias toward locally related regions. In ROSE image analysis, local features are pivotal, since cell shape and nucleus size help identify cancerous cells among their benign counterparts. However, global features of the cells, including their relative size and arrangement, are also essential in distinguishing positive from negative samples. Achieving robust performance under the limited dataset sizes typical of medical data is a further challenge. Cutting-edge Transformer modules have performed excellently in recent computer vision tasks, demonstrating strong global modeling through the attention mechanism. Despite this strength, Transformers usually require a large dataset to reach their full power, which is currently infeasible for many medical-data-based tasks.

Therefore, hybridizing the Transformer with a robust CNN backbone is a natural way to improve the local-global modeling process.

MSHT model

The proposed Multi-stage Hybrid Transformer (MSHT) is designed for the analysis of pancreatic cancer cytopathological images. Together with the clinical innovation of ROSE, MSHT aims at a faster, pathologist-free diagnostic workflow for pancreatic cancer. The main idea is to concordantly encode the local features and inductive bias of the early CNN stages into the Transformer's global modeling process. MSHT comprises a CNN backbone, which generates feature maps at different stages, and a focus-guided decoder (FGD) structure, which performs global modeling with local attention guidance.

Figure 1: Overview of the MSHT architecture.
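Since the repository is built on timm (see File structure below), the multi-stage feature maps that feed the FGD can be obtained from a standard backbone as in the minimal sketch below. The backbone choice, stage indices, and 384x384 input size (suggested by the run names in the tables) are illustrative assumptions, not necessarily the exact MSHT configuration.

```python
# Minimal sketch: multi-stage feature maps from a CNN backbone via timm.
# Backbone, out_indices, and input size are assumptions for illustration.
import timm
import torch

backbone = timm.create_model(
    "resnet50", pretrained=True, features_only=True, out_indices=(1, 2, 3, 4)
)

x = torch.randn(1, 3, 384, 384)            # assumed 384x384 input
stage_maps = backbone(x)                   # one feature map per CNN stage
for i, f in enumerate(stage_maps, 1):
    print(f"stage {i}: {tuple(f.shape)}")  # (1,256,96,96) ... (1,2048,12,12)
```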

Inspired by the gaze and glance of human eyes, we designed the FGD Focus block to produce attention guidance. In the Focus block, the feature maps from different CNN stages are transformed into attention guidance. Combining prominent and general information, the output sequence helps the Transformer decoders with global modeling. The Focus block is stacked from:
1. an attention block
2. a dual-path pooling layer
3. a projecting 1x1 convolution
A sketch of this stacking is given below.
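The released code defines the real block; purely as an illustration, here is a hedged PyTorch sketch of such a stack. The CBAM-style spatial attention, the 7x7 pooled grid, and all names are our own assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class FocusSketch(nn.Module):
    """Illustrative Focus block: attention -> dual-path pooling -> 1x1 projection.
    A hedged reconstruction from the paper's description, not the official code."""

    def __init__(self, in_ch: int, embed_dim: int, pool_size: int = 7):
        super().__init__()
        # 1) attention block (CBAM-style spatial attention, assumed)
        self.attn = nn.Sequential(nn.Conv2d(2, 1, kernel_size=7, padding=3),
                                  nn.Sigmoid())
        # 2) dual-path pooling: average and max pooling to a fixed grid
        self.avg = nn.AdaptiveAvgPool2d(pool_size)
        self.max = nn.AdaptiveMaxPool2d(pool_size)
        # 3) projecting 1x1 convolution into the Transformer embedding dim
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=1)

    def forward(self, x):                        # x: (B, C, H, W) CNN stage map
        s = torch.cat([x.mean(1, keepdim=True),
                       x.max(1, keepdim=True).values], dim=1)
        x = x * self.attn(s)                     # attention-weighted features
        x = self.avg(x) + self.max(x)            # prominent + general information
        x = self.proj(x)                         # (B, D, p, p)
        return x.flatten(2).transpose(1, 2)      # guidance tokens: (B, p*p, D)
```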

Meanwhile, a new decoder is designed to work with the attention guidance from the CNN stages. We use multi-head guided attention (MHGA) to capture the prominent and general attention information and encode it through the Transformer modeling process; a rough sketch follows the figure below.

Figure: the FGD decoder structure.
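As an illustration only, the sketch below shows one way MHGA-style guided cross-attention could be wired with PyTorch's nn.MultiheadAttention: the decoder tokens act as queries, while the guidance sequence from a Focus block supplies keys and values. The layer layout and names are our assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class GuidedDecoderLayerSketch(nn.Module):
    """Illustrative decoder layer with multi-head guided attention (MHGA):
    a hedged reconstruction, not the official MSHT code."""

    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.mhga = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, tokens, guidance):
        # tokens:   (B, N, D) decoder sequence (including the class token)
        # guidance: (B, M, D) guidance tokens from one CNN stage's Focus block
        q = self.norm1(tokens)
        attn_out, _ = self.mhga(q, guidance, guidance)  # guided cross-attention
        tokens = tokens + attn_out
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens
```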

Experimental results

| Model | Acc (%) | Specificity (%) | Sensitivity (%) | PPV (%) | NPV (%) | F1 score (%) |
|---|---|---|---|---|---|---|
| ResNet50 | 95.0177096 | 95.5147059 | 94.1254125 | 92.1702818 | 96.6959145 | 93.1175649 |
| VGG-16 | 94.9232586 | 95.6617647 | 93.5973597 | 92.4202662 | 96.4380638 | 92.9517884 |
| VGG-19 | 94.8288076 | 96.0294118 | 92.6732673 | 93.0172654 | 95.9577772 | 92.7757736 |
| EfficientNet-B3 | 93.2939787 | 95.4779412 | 89.3729373 | 91.8015468 | 94.1863486 | 90.5130405 |
| EfficientNet-B4 | 90.9090909 | 94.4117647 | 84.620462 | 89.4313858 | 91.6892225 | 86.9433552 |
| Inception-V3 | 93.837072 | 94.4852941 | 92.6732673 | 90.3515408 | 95.8628556 | 91.4941479 |
| Xception | 94.6871311 | 96.0661765 | 92.2112211 | 92.9104126 | 95.6827139 | 92.5501388 |
| MobileNet-V3 | 93.4356553 | 95.1102941 | 90.4290429 | 91.1976193 | 94.6950621 | 90.7970552 |
| ViT (base) | 94.498229 | 95.2573529 | 93.1353135 | 91.6291799 | 96.1415203 | 92.3741742 |
| DeiT (base) | 94.5218418 | 95.0367647 | 93.5973597 | 91.340846 | 96.4118682 | 92.4224823 |
| Swin Transformer (base) | 94.9232586 | 95.1838235 | 94.4554455 | 91.7376454 | 96.8749621 | 93.0308148 |
| MSHT (Ours) | 95.6788666 | 96.9485294 | 93.3993399 | 94.5449211 | 96.3529107 | 93.9414631 |

Ablation studies

| Ablation setting | Model | Acc (%) | Specificity (%) | Sensitivity (%) | PPV (%) | NPV (%) | F1 score (%) |
|---|---|---|---|---|---|---|---|
| directly stacked | Hybrid1_384_401_lf25_b8 | 94.8996458 | 95.5882353 | 93.6633663 | 92.2408235 | 96.4483431 | 92.9292015 |
| 3-stage design | Hybrid3_384_401_lf25_b8 | 94.7343566 | 96.5441176 | 91.4851485 | 93.6616202 | 95.3264167 | 92.5493201 |
| no class token | Hybrid2_384_No_CLS_Token_401_lf25_b8 | 94.8524203 | 96.25 | 92.3432343 | 93.2412276 | 95.7652112 | 92.7734486 |
| no positional encoding | Hybrid2_384_No_Pos_emb_401_lf25_b8 | 94.7107438 | 96.1029412 | 92.2112211 | 92.9958805 | 95.7084196 | 92.5636149 |
| no attention module | Hybrid2_384_No_ATT_401_lf25_b8 | 94.5218418 | 95.4411765 | 92.8712871 | 91.9562939 | 96.0234692 | 92.3824865 |
| SE attention module | Hybrid2_384_SE_ATT_401_lf25_b8 | 94.7107438 | 96.25 | 91.9471947 | 93.2475287 | 95.5635663 | 92.5598981 |
| CBAM attention module | Hybrid2_384_CBAM_ATT_401_lf25_b8 | 95.1121606 | 95.9558824 | 93.5973597 | 92.8351294 | 96.4240051 | 93.2000018 |
| no pretraining | Hybrid2_384_401_lf25_b8 | 95.3010626 | 96.2132353 | 93.6633663 | 93.2804336 | 96.4716772 | 93.4504212 |
| different lr | Hybrid2_384_401_PT_lf_b8 | 95.3719008 | 96.25 | 93.7953795 | 93.3623954 | 96.5397241 | 93.5582297 |
| MSHT (Ours) | Hybrid2_384_401_PT_lf25_b8 | 95.6788666 | 96.9485294 | 93.3993399 | 94.5449211 | 96.3529107 | 93.9414631 |

Visualization results of MSHT

Focusing on interpretability, MSHT performs well when its attention areas are visualized with the Grad-CAM technique (a minimal Grad-CAM sketch is given at the end of this section).

Figure: Grad-CAM attention visualizations.

• For most cases, as shown in the figure, MSHT correctly distinguishes the samples while focusing on the same areas as senior pathologists, outperforming most counterparts.


• Additionally, the misclassification problem has yet to be overcome; we illustrate it with two examples.

A few positive samples were misclassified as negative. Unlike senior pathologists, MSHT had difficulty distinguishing cancer cells by their arrangement and relative-size information when only a small number of cells were present.

Figure: a positive sample misclassified as negative.

A specific image was misclassified as positive by 3 of the 5 fold models. According to the analysis of senior pathologists, the cause lies in the distortion of the squeezed sample, whose altered cell shapes misled MSHT.

Figure: the negative sample misclassified as positive.
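For reference, the snippet below is a minimal, generic Grad-CAM implementation using PyTorch hooks. It is our own hedged sketch of the technique, not the exact script used to produce the figures above.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx=None):
    """Minimal Grad-CAM: weight a conv layer's activations by the gradient
    of the target class score, then ReLU and normalize to [0, 1]."""
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))

    logits = model(image)                        # image: (1, 3, H, W)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    a, g = feats["a"], grads["a"]                # both (1, C, h, w)
    weights = g.mean(dim=(2, 3), keepdim=True)   # channel-wise gradient average
    cam = F.relu((weights * a).sum(dim=1))       # (1, h, w)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam                                   # upsample to image size to overlay
```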

File structure

This repository is built on timm and PyTorch 1.9.0+cu102.

We first use the Pretrain.py script to pretrain the model on the ImageNet-1k dataset, and then use the pretrained model for the 5-fold experiments with Train.py.

All implementation details are set as the default hyperparameters in the ArgumentParser at the end of each script.
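Accordingly, the full option list and the defaults can be inspected with argparse's standard --help flag (no specific flags are assumed here):

```
python Pretrain.py --help   # list pretraining options and their defaults
python Train.py --help      # list 5-fold training options and their defaults
```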

A Colab script is provided for your convenience.
