ULMFiT for Genomic Sequence Data

Overview

Genomic ULMFiT

This is an implementation of ULMFiT for genomic sequence classification using PyTorch and fastai. The model architecture is based on the AWD-LSTM model, consisting of an embedding layer, three LSTM layers, and a final set of linear layers.

The ULMFiT approach uses three training phases to produce a classification model:

  1. Train a language model on a large, unlabeled corpus
  2. Fine-tune the language model on the classification corpus
  3. Use the fine-tuned language model to initialize a classification model

This method is particularly advantageous for genomic data, where unlabeled data is abundant and labeled data is scarce. The ULMFiT approach allows us to train a model on a large, unlabeled genomic corpus in an unsupervised fashion. The pre-trained language model then serves as a feature extractor for parsing genomic data.
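The three phases map naturally onto the fastai text API. Below is a minimal sketch assuming fastai v1-style calls; the data objects (`data_lm`, `data_clas_lm`, `data_clas`), epoch counts, and learning rates are illustrative placeholders, not the repository's exact code.

```python
# Minimal sketch of the three ULMFiT phases, assuming fastai v1-style APIs.
# The DataBunch objects (data_lm, data_clas_lm, data_clas) are assumed to be
# built elsewhere with genomic k-mer tokenization; hyperparameters are illustrative.
from fastai.text import language_model_learner, text_classifier_learner, AWD_LSTM

# 1. Train a language model on a large, unlabeled genomic corpus
learn_lm = language_model_learner(data_lm, AWD_LSTM, pretrained=False, drop_mult=0.3)
learn_lm.fit_one_cycle(10, 2e-3)
learn_lm.save_encoder('genome_enc')  # keep the learned encoder weights

# 2. Fine-tune the language model on the classification corpus
learn_ft = language_model_learner(data_clas_lm, AWD_LSTM, pretrained=False, drop_mult=0.3)
learn_ft.load_encoder('genome_enc')
learn_ft.fit_one_cycle(5, 1e-3)
learn_ft.save_encoder('genome_enc_ft')

# 3. Initialize a classification model with the fine-tuned encoder
learn_clas = text_classifier_learner(data_clas, AWD_LSTM, pretrained=False, drop_mult=0.5)
learn_clas.load_encoder('genome_enc_ft')
learn_clas.fit_one_cycle(5, 1e-3)
```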

Typical deep learning approaches to genomics classification are highly restricted to whatever labeled data is available. Models are usually trained from scratch on small datasets, leading to problems with overfitting. When unsupervised pre-training is used, it is typically done only on the classification dataset or on synthetically generated data. The Genomic-ULMFiT approach instead pre-trains on genome-scale corpora, producing better feature extractors than training on the classification corpus alone.

For a deep dive into the ULMFiT approach, model architectures, regularization and training strategies, see the Methods Long Form document in the Methods section.

Results

Performance of Genomic-ULMFiT relative to other methods

Promoter Classification

E. coli promoters

The Genomic-ULMFiT method performs well at the task of classifying promoter sequences from random sections of the genome. The process of unsupervised pre-training and fine-tuning has a clear impact on the performance of the classification model.

| Model | Accuracy | Precision | Recall | Correlation Coefficient |
|-------|----------|-----------|--------|-------------------------|
| Naive | 0.834 | 0.847 | 0.816 | 0.670 |
| E. coli Genome Pre-Training | 0.919 | 0.941 | 0.893 | 0.839 |
| Genomic Ensemble Pre-Training | 0.973 | 0.980 | 0.966 | 0.947 |

Data generation described in notebook

Notebook Directory

Classification performance on human promoters is competitive with published results

Human Promoters (short)

For the short promoter sequences, using data from Recognition of Prokaryotic and Eukaryotic Promoters using Convolutional Deep Learning Neural Networks:

| Model | DNA Size | kmer/stride | Accuracy | Precision | Recall | Correlation Coefficient | Specificity |
|-------|----------|-------------|----------|-----------|--------|-------------------------|-------------|
| Kh et al. | -200/50 | - | - | - | 0.9 | 0.89 | 0.98 |
| Naive Model | -200/50 | 5/2 | 0.80 | 0.74 | 0.80 | 0.59 | 0.80 |
| With Pre-Training | -200/50 | 5/2 | 0.922 | 0.963 | 0.849 | 0.844 | 0.976 |
| With Pre-Training and Fine Tuning | -200/50 | 5/2 | 0.977 | 0.959 | 0.989 | 0.955 | 0.969 |
| With Pre-Training and Fine Tuning | -200/50 | 5/1 | 0.990 | 0.983 | 0.995 | 0.981 | 0.987 |
| With Pre-Training and Fine Tuning | -200/50 | 3/1 | 0.995 | 0.992 | 0.996 | 0.991 | 0.994 |
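The kmer/stride column indicates how input sequences are tokenized: a k-mer size and the stride by which the window advances. A minimal sketch of this style of tokenization (the helper name and example sequence are illustrative, not taken from the repository):

```python
def kmer_tokenize(seq, k=5, stride=2):
    """Split a DNA sequence into overlapping k-mer tokens.

    Illustrative helper: with k=5 and stride=2, the sequence is read in
    5-base windows that advance 2 bases at a time.
    """
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

print(kmer_tokenize("ATGCGTACGT", k=5, stride=2))
# ['ATGCG', 'GCGTA', 'GTACG']
```

Smaller k-mers with stride 1 produce longer, more overlapping token sequences, which is consistent with the accuracy gains in the 5/1 and 3/1 rows above.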

Data Source

Notebook Directory

Human Promoters (long)

For the long promoter sequences, using data from PromID: Human Promoter Prediction by Deep Learning:

| Model | DNA Size | Models | Accuracy | Precision | Recall | Correlation Coefficient |
|-------|----------|--------|----------|-----------|--------|-------------------------|
| Umarov et al. | -1000/500 | 2 Model Ensemble | - | 0.636 | 0.802 | 0.714 |
| Umarov et al. | -200/400 | 2 Model Ensemble | - | 0.769 | 0.755 | 0.762 |
| Naive Model | -500/500 | Single Model | 0.858 | 0.877 | 0.772 | 0.708 |
| With Pre-Training | -500/500 | Single Model | 0.888 | 0.90 | 0.824 | 0.770 |
| With Pre-Training and Fine Tuning | -500/500 | Single Model | 0.892 | 0.877 | 0.865 | 0.778 |

Data generation described in notebook

Notebook Directory

Other Bacterial Promoters

This table shows results on data from Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks. These results show how CNN-based methods can sometimes perform better when training on small datasets.

| Method | Organism | Training Examples | Accuracy | Precision | Recall | Correlation Coefficient | Specificity |
|--------|----------|-------------------|----------|-----------|--------|-------------------------|-------------|
| Kh et al. | E. coli | 2936 | - | - | 0.90 | 0.84 | 0.96 |
| Genomic-ULMFiT | E. coli | 2936 | 0.956 | 0.917 | 0.880 | 0.871 | 0.977 |
| Kh et al. | B. subtilis | 1050 | - | - | 0.91 | 0.86 | 0.95 |
| Genomic-ULMFiT | B. subtilis | 1050 | 0.905 | 0.857 | 0.789 | 0.759 | 0.95 |

Data Source

Notebook Directory

Metagenomics Classification

Genomic-ULMFiT shows improved performance on the metagenomics taxonomic dataset from Deep learning models for bacteria taxonomic classification of metagenomic data.

| Method | Data Source | Accuracy | Precision | Recall | F1 |
|--------|-------------|----------|-----------|--------|-----|
| Fiannaca et al. | Amplicon | 0.9137 | 0.9162 | 0.9137 | 0.9126 |
| Genomic-ULMFiT | Amplicon | 0.9239 | 0.9402 | 0.9332 | 0.9306 |
| Fiannaca et al. | Shotgun | 0.8550 | 0.8570 | 0.8520 | 0.8511 |
| Genomic-ULMFiT | Shotgun | 0.8797 | 0.8824 | 0.8769 | 0.8758 |

Data Source

Notebook Directory

Enhancer Classification

When trained on a dataset of mammalian enhancer sequences from Enhancer Identification using Transfer and Adversarial Deep Learning of DNA Sequences, Genomic-ULMFiT improves on the results of Cohn et al.

| Model (ROC-AUC) | Human | Mouse | Dog | Opossum |
|-----------------|-------|-------|-----|---------|
| Cohn et al. | 0.80 | 0.78 | 0.77 | 0.72 |
| Genomic-ULMFiT 5-mer Stride 2 | 0.812 | 0.871 | 0.773 | 0.787 |
| Genomic-ULMFiT 4-mer Stride 2 | 0.804 | 0.876 | 0.771 | 0.786 |
| Genomic-ULMFiT 3-mer Stride 1 | 0.819 | 0.875 | 0.788 | 0.798 |

Data Source

Notebook Directory

mRNA/lncRNA Classification

This table shows results for training a classification model on a dataset of coding mRNA sequences and long noncoding RNA (lncRNA) sequences. The dataset comes from A deep recurrent neural network discovers complex biological rules to decipher RNA protein-coding potential by Hill et al. The dataset contains two test sets: a standard test set and a challenge test set.

| Model | Test Set | Accuracy | Specificity | Sensitivity | Precision | MCC |
|-------|----------|----------|-------------|-------------|-----------|-----|
| GRU Ensemble (Hill et al.)* | Standard Test Set | 0.96 | 0.97 | 0.95 | 0.97 | 0.92 |
| Genomic-ULMFiT (3-mer, stride 1) | Standard Test Set | 0.963 | 0.952 | 0.974 | 0.953 | 0.926 |
| GRU Ensemble (Hill et al.)* | Challenge Test Set | 0.875 | 0.95 | 0.80 | 0.95 | 0.75 |
| Genomic-ULMFiT (3-mer, stride 1) | Challenge Test Set | 0.90 | 0.944 | 0.871 | 0.939 | 0.817 |

(*) Hill et al. presented their results as a plot rather than as a data table. Values in the above table are estimated by reading off the plot.

Data Source

Notebook Directory

Interpreting Results

One way to gain insight into how the classification model makes decisions is to perturb regions of a given input sequence and observe how changes to different regions affect the classification result. This allows us to create plots like the one below, highlighting sequence regions that are important for classification. In the plot below, the red line corresponds to a true transcription start site; the predictions are sensitive to changes around that location. More detail on interpretations can be found in the Model Interpretations directory.
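A minimal sketch of this perturbation procedure, assuming an illustrative `predict_proba` wrapper around the trained classifier (the function name and scrambling scheme are assumptions, not the repository's exact code):

```python
import random

def perturbation_scores(seq, predict_proba, window=20):
    """Scramble each window of the input and record the drop in predicted
    probability; large drops mark regions important to the classifier.

    predict_proba is an assumed wrapper returning the positive-class
    probability for a sequence string.
    """
    base = predict_proba(seq)
    scores = []
    for start in range(0, len(seq) - window + 1, window):
        chunk = list(seq[start:start + window])
        random.shuffle(chunk)  # destroy local sequence information
        perturbed = seq[:start] + "".join(chunk) + seq[start + window:]
        scores.append((start, base - predict_proba(perturbed)))
    return scores
```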

Long Sequence Inference

Inference on long, unlabeled sequences can be done by breaking the input sequence into chunks and plotting prediction results as a function of position. The image below shows a sample prediction of promoter locations on a 40,000 bp region of the E. coli genome, with true promoter locations shown in red. More detail can be found in this notebook.
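A minimal sketch of this chunked inference, again assuming an illustrative `predict_proba` wrapper; the window and step sizes are placeholders:

```python
def sliding_window_predictions(seq, predict_proba, window=500, step=100):
    """Score overlapping windows of a long sequence so that predictions
    can be plotted against genomic position."""
    positions, probs = [], []
    for start in range(0, len(seq) - window + 1, step):
        positions.append(start)
        probs.append(predict_proba(seq[start:start + window]))
    return positions, probs
```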

Relevant Literature

For a comparison to other published methods, see Section 6 of the Methods notebook. Here are some relevant papers in the deep genomics classification space.

DeepCRISPR: optimized CRISPR guide RNA design by deep learning

Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks

PromID: human promoter prediction by deep learning

Deep Learning for Genomics: A Concise Overview

Prediction of deleterious mutations in coding regions of mammals with transfer learning

Enhancer Identification using Transfer and Adversarial Deep Learning of DNA Sequences

PEDLA: predicting enhancers with a deep learning-based algorithmic framework

Predicting enhancers with deep convolutional neural networks

BiRen: predicting enhancers with a deep-learning-based model using the DNA sequence alone

Deep learning models for bacteria taxonomic classification of metagenomic data

Prediction of enhancer-promoter interactions via natural language processing

A deep recurrent neural network discovers complex biological rules to decipher RNA protein-coding potential

Recurrent Neural Network for Predicting Transcription Factor Binding Sites

Learning the Language of the Genome using RNNs
