Clairvoyance: a Unified, End-to-End AutoML Pipeline for Medical Time Series

Overview

Authors: van der Schaar Lab

This repository contains implementations of Clairvoyance: A Pipeline Toolkit for Medical Time Series for the following applications:

  • Time-series prediction (one-shot and online)
  • Transfer learning
  • Individualized time-series treatment effects (ITE) estimation
  • Active sensing on time-series data
  • AutoML

All API files for these applications can be found in the api/ folder; all tutorials can be found in the tutorial/ folder.

[Figure: Block diagram of Clairvoyance]

Installation

There are currently two ways of installing the required dependencies: using Docker or using Conda.

Note on Requirements

  • Clairvoyance has been tested on Ubuntu 20.04, but should be broadly compatible with common Linux systems.
  • The Docker installation method is additionally compatible with Mac and Windows systems that support Docker.
  • Hardware requirements depend on the underlying ML models used, but a machine that can handle ML research tasks is recommended.
  • For faster computation, a CUDA-capable Nvidia GPU is recommended (follow the CUDA-enabled installation steps below).

Docker installation

  1. Install Docker on your system: https://docs.docker.com/get-docker/.
  2. [Required for CUDA-enabled installation only] Install Nvidia container runtime: https://github.com/NVIDIA/nvidia-container-runtime/.
    • Assumes Nvidia drivers are correctly installed on your system.
  3. Get the latest Clairvoyance Docker image:
    $ docker pull clairvoyancedocker/clv:latest
  4. To run the Docker container as a terminal, execute the following from the Clairvoyance repository root:
    $ docker run -i -t --gpus all --network host -v $(pwd)/datasets/data:/home/clvusr/clairvoyance/datasets/data clairvoyancedocker/clv
    • Explanation of the docker run arguments:
      • -i -t: Run a terminal session.
      • --gpus all: [Required for CUDA-enabled installation only] passes your GPU(s) to the Docker container; otherwise skip this option.
      • --network host: Use your machine's network and forward ports. You could alternatively publish ports, e.g. -p 8888:8888.
      • -v $(pwd)/datasets/data:/home/clvusr/clairvoyance/datasets/data: Shares directories (here, the data directory) with the Docker container as volumes.
      • clairvoyancedocker/clv: Specifies Clairvoyance Docker image.
    • If using Windows:
      • Use PowerShell and first run the command $pwdwin = $(pwd).Path. Then use $pwdwin instead of $(pwd) in the docker run command.
    • If using Windows or Mac:
      • Due to how Docker networking works, replace --network host with -p 8888:8888.
  5. Run all following Clairvoyance API commands, jupyter notebooks etc. from within this Docker container.

Conda installation

Conda installation has been tested on Ubuntu 20.04 only.

  1. From the Clairvoyance repo root, execute:
    $ conda env create --name clvenv -f ./environment.yml
    $ conda activate clvenv
  2. Run all following Clairvoyance API commands, jupyter notebooks etc. in the clvenv environment.

Data

Clairvoyance expects your dataset files to be defined as follows:

  • Four CSV files (may be compressed), as illustrated below:
    static_test_data.csv
    static_train_data.csv
    temporal_test_data.csv
    temporal_train_data.csv
    
  • Static data file content format:
    id,my_feature,my_other_feature,my_third_feature_etc
    3wOSm2,11.00,4,-1.0
    82HJss,3.40,2,2.1
    iX3fiP,7.01,3,-0.4
    ...
    
  • Temporal data file content format:
    id,time,variable,value
    3wOSm2,0.0,my_first_temporal_feature,0.45
    3wOSm2,0.5,my_first_temporal_feature,0.47
    3wOSm2,1.2,my_first_temporal_feature,0.49
    3wOSm2,0.0,my_second_temporal_feature,10.0
    3wOSm2,0.1,my_second_temporal_feature,12.4
    3wOSm2,0.3,my_second_temporal_feature,9.3
    82HJss,0.0,my_first_temporal_feature,0.22
    82HJss,1.0,my_first_temporal_feature,0.44
    ...
    
  • The id column is required in the static data files. The id, time, variable, and value columns are required in the temporal files. Sample IDs must match between the static and temporal files (see the sketch after this list).
  • Your data files are expected to be under:
    <clairvoyance_repo_root>/datasets/data/<your_dataset_name>/
    
  • See tutorials for how to define your dataset(s) in code.
  • Clairvoyance examples refer to some existing datasets, e.g. mimic, ward. These are confidential datasets (or, in the case of MIMIC-III, require a training course and an access request) and are not provided here. Contact [email protected] for more details.
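
For a quick sanity check of a dataset laid out as above, here is a minimal Python sketch using pandas (the dataset name my_dataset is a placeholder, and this snippet is not part of the Clairvoyance API). It loads the training files, verifies that sample IDs match between the static and temporal tables, and pivots the long-format temporal file into a wide table for inspection:

    import pandas as pd

    # Placeholder dataset name; substitute your own directory under datasets/data/.
    root = "datasets/data/my_dataset"
    static_train = pd.read_csv(root + "/static_train_data.csv")
    temporal_train = pd.read_csv(root + "/temporal_train_data.csv")

    # Sample IDs must match between the static and temporal files.
    assert set(static_train["id"]) == set(temporal_train["id"]), "ID mismatch"

    # Pivot the long-format temporal data into one column per variable,
    # indexed by sample ID and time.
    wide = temporal_train.pivot_table(index=["id", "time"],
                                      columns="variable", values="value")
    print(wide.head())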

Extract data from MIMIC-III

To use MIMIC-III with Clairvoyance, you need to get access to MIMIC-III and follow the instructions for installing it in a Postgres database: https://mimic.physionet.org/tutorials/install-mimic-locally-ubuntu/

$ cd datasets/mimic_data_extraction && python extract_antibiotics_dataset.py

Usage

  • To run tutorials:
    • Launch jupyter lab: $ jupyter-lab.
      • If using Windows or Mac and following the Docker installation method, run jupyter-lab --ip="0.0.0.0".
    • Open jupyter lab in the browser by following the URL with the token.
    • Navigate to tutorial/ and run a tutorial of your choice.
  • To run Clairvoyance API from the command line, execute the appropriate command from within the Docker terminal (see example command below).

Example: Time-series prediction

To run the pipeline for training and evaluation on the time-series prediction framework, simply run $ python api/main_api_prediction.py or take a look at the jupyter notebook tutorial/tutorial_prediction.ipynb.

Note that any model architecture can be used as the predictor model, such as an RNN, temporal convolutions, or a transformer. The only requirement is that the predictor model exposes fit and predict functions; a minimal sketch of this interface follows.
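
The class below is illustrative only: the name and internals are hypothetical stand-ins, and only the fit/predict contract comes from the documentation above.

    import numpy as np

    class MeanPredictor:
        # Hypothetical predictor: any model exposing fit and predict
        # functions can serve as the predictor model.
        def __init__(self):
            self.mean_ = None

        def fit(self, x, y):
            # Trivial stand-in for training: remember the mean label.
            self.mean_ = float(np.mean(y))
            return self

        def predict(self, x):
            # Predict the stored mean for every input sequence.
            return np.full(len(x), self.mean_)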

  • Stages of the time-series prediction:

    • Import dataset
    • Preprocess data
    • Define the problem (feature, label, etc.)
    • Impute missing components
    • Select the relevant features
    • Train time-series predictive model
    • Estimate the uncertainty of the predictions
    • Interpret the predictions
    • Evaluate the time-series prediction performance on the testing set
    • Visualize the outputs (performance, predictions, uncertainties, and interpretations)
  • Command inputs:

    • data_name: mimic, ward, cf
    • normalization: minmax, standard, None
    • one_hot_encoding: input features that need to be one-hot encoded
    • problem: one-shot or online
    • max_seq_len: maximum sequence length after padding (illustrated in the sketch at the end of this example)
    • label_name: the column name for the label(s)
    • treatment: the column name for treatments
    • static_imputation_model: mean, median, mice, missforest, knn, gain
    • temporal_imputation_model: mean, median, linear, quadratic, cubic, spline, mrnn, tgain
    • feature_selection_model: greedy-addition, greedy-deletion, recursive-addition, recursive-deletion, None
    • feature_number: number of features to select
    • model_name: rnn, gru, lstm, attention, tcn, transformer
    • h_dim: hidden dimensions
    • n_layer: number of layers
    • n_head: number of heads (transformer model only)
    • batch_size: number of samples in mini-batch
    • epochs: number of epochs
    • learning_rate: learning rate
    • static_mode: how to utilize static features (concatenate or None)
    • time_mode: how to utilize time information (concatenate or None)
    • task: classification or regression
    • uncertainty_model_name: uncertainty estimation model name (ensemble)
    • interpretation_model_name: interpretation model name (tinvase)
    • metric_name: auc, apr, mae, mse
  • Example command:

    $ cd api
    $ python main_api_prediction.py \
        --data_name cf --normalization minmax --one_hot_encoding admission_type \
        --problem one-shot --max_seq_len 24 --label_name death \
        --static_imputation_model median --temporal_imputation_model median \
        --model_name lstm --h_dim 100 --n_layer 2 --n_head 2 --batch_size 400 \
        --epochs 20 --learning_rate 0.001 \
        --static_mode concatenate --time_mode concatenate \
        --task classification --uncertainty_model_name ensemble \
        --interpretation_model_name tinvase --metric_name auc
  • Outputs:

    • Model prediction
    • Model performance
    • Prediction uncertainty
    • Prediction interpretation
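
As a loose illustration of two of the command inputs above, the following sketch shows what minmax normalization and padding to max_seq_len typically mean for a single temporal sample. It describes the general technique under stated assumptions, not Clairvoyance's actual internals.

    import numpy as np

    def minmax_normalize(x):
        # Scale each feature column to [0, 1] (the "minmax" option).
        lo, hi = x.min(axis=0), x.max(axis=0)
        return (x - lo) / np.where(hi > lo, hi - lo, 1.0)

    def pad_sequence(seq, max_seq_len=24):
        # Truncate or zero-pad a (time, features) array to max_seq_len
        # steps, as implied by the --max_seq_len argument.
        seq = seq[:max_seq_len]
        pad = np.zeros((max_seq_len - len(seq), seq.shape[1]))
        return np.vstack([seq, pad])

    series = np.random.rand(10, 3)  # 10 time steps, 3 temporal features
    print(pad_sequence(minmax_normalize(series)).shape)  # (24, 3)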

Citation

To cite Clairvoyance in your publications, please use the following reference.

Daniel Jarrett, Jinsung Yoon, Ioana Bica, Zhaozhi Qian, Ari Ercole, and Mihaela van der Schaar (2021). Clairvoyance: A Pipeline Toolkit for Medical Time Series. In International Conference on Learning Representations. Available at: https://openreview.net/forum?id=xnC8YwKUE3k.

You can also use the following Bibtex entry.

@inproceedings{
  jarrett2021clairvoyance,
  title={Clairvoyance: A Pipeline Toolkit for Medical Time Series},
  author={Daniel Jarrett and Jinsung Yoon and Ioana Bica and Zhaozhi Qian and Ari Ercole and Mihaela van der Schaar},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=xnC8YwKUE3k}
}

To cite the Clairvoyance alpha blog post, please use:

van Der Schaar, M., Yoon, J., Qian, Z., Jarrett, D., & Bica, I. (2020). clairvoyance alpha: the first pipeline toolkit for medical time series. [Webpages]. https://doi.org/10.17863/CAM.70020

@misc{https://doi.org/10.17863/cam.70020,
  doi = {10.17863/CAM.70020},
  url = {https://www.repository.cam.ac.uk/handle/1810/322563},
  author = {Van Der Schaar, Mihaela and Yoon, Jinsung and Qian, Zhaozhi and Jarrett, Dan and Bica, Ioana},
  title = {clairvoyance alpha: the first pipeline toolkit for medical time series},
  publisher = {Apollo - University of Cambridge Repository},
  year = {2020}
}