AptaMat is a simple script which aims to measure differences between DNA or RNA secondary structures.

Overview

AptaMat

Purpose

AptaMat is a simple script which aims to measure differences between DNA or RNA secondary structures. The method is based on comparing the matrices that represent the two secondary structures to analyze, which are akin to dot plots. The dot-bracket notation of a structure is converted into a half binary matrix whose width equals the structure's length. Each matrix cell (i, j) is filled with '1' if the nucleotide at position i is paired with the nucleotide at position j, and with '0' otherwise.
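
As a minimal illustration (a sketch, not the script's actual implementation), the conversion of a simple dot-bracket string using only '(' and ')' into such a half matrix could look as follows; the helper name dotbracket_to_matrix is hypothetical:

import numpy as np

def dotbracket_to_matrix(structure):
    """Convert a dot-bracket string into a half binary matrix (sketch).

    Cell (i, j) is set to 1 when the nucleotide at position i is paired
    with the nucleotide at position j, and left at 0 otherwise.
    """
    n = len(structure)
    matrix = np.zeros((n, n), dtype=int)
    stack = []
    for j, symbol in enumerate(structure):
        if symbol == "(":
            stack.append(j)
        elif symbol == ")":
            i = stack.pop()      # position of the matching opening bracket
            matrix[i, j] = 1     # fill only the upper half (i < j)
    return matrix

For example, dotbracket_to_matrix("(((...)))") sets the cells (0, 8), (1, 7) and (2, 6) to 1.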

The difference between the matrices is calculated by applying the Manhattan distance to each point of the template matrix against the points of the compared matrix. The calculation is then repeated from the compared matrix toward the template matrix so that all differences are handled. Both sums are added and divided by the total number of points in the two matrices.
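
For reference, the Manhattan (city-block) distance between two matrix points (i1, j1) and (i2, j2) is |i1 - i2| + |j1 - j2|. With the SciPy dependency it can be computed as below (a small sketch, not AptaMat's internal code):

from scipy.spatial.distance import cityblock

# Manhattan distance between two base-pair coordinates,
# e.g. the points (2, 6) and (1, 7) of two matrices:
d = cityblock((2, 6), (1, 7))   # |2 - 1| + |6 - 7| = 2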

Dependencies

AptaMat is written in Python 3.8+.

Two Python modules are needed: NumPy and SciPy.

These can be installed from the command prompt with either:

./setup

or

pip install numpy
pip install scipy

Use of Anaconda is highly recommended.

Usage

AptaMat is a flexible Python script which can take several arguments:

  • structures, followed by secondary structures written in dot-bracket format
  • files, followed by paths to formatted files containing one or several secondary structures in dot-bracket format

Both structures and files are independent functions in the script and cannot be called at the same time.

usage: AptaMat.py [-h] [-structures STRUCTURES [STRUCTURES ...]] [-files FILES [FILES ...]]

The structures argument takes string-formatted secondary structures. The first input structure is the template structure for the comparison; the following inputs are the compared structures. There is no limit on the number of inputs. Quotes are necessary.

usage: AptaMat.py -structures [-h] "struct_1" "struct_2" ["struct_n" ...]

The files argument must be a formatted file. Multiple files can be parsed. The first structure encountered during parsing is used as the template structure; the others are the compared structures.

usage: AptaMat.py -files [-h] struct_file_1 [struct_file_n ...]

The input must be a text file containing at least the secondary structures; additional information such as a title, sequence, or structure index is accepted. If several files are provided, the function parses them one by one and always takes the first structure encountered as the template structure. Files must be formatted as follows:

>5HRU
TCGATTGGATTGTGCCGGAAGTGCTGGCTCGA
--Template--
((((.........(((((.....)))))))))
--Compared--
.........(((.(((((.....))))).)))
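
A minimal sketch of how such a file could be parsed (a hypothetical helper, not the script's own parser) is shown below; it assumes that '>' introduces a title, that '--' lines are annotations, and that structure lines contain only dot-bracket symbols:

def read_structures(path):
    """Collect dot-bracket structures from a formatted file (sketch)."""
    structures = []
    dotbracket_chars = set(".()[]{}<>")   # extended letter pairs not handled here
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith(">") or line.startswith("--"):
                continue   # title, annotation or empty line
            if set(line) <= dotbracket_chars:   # sequence lines fail this test
                structures.append(line)
    return structures[0], structures[1:]   # template, compared structures

# template, compared = read_structures("example.fa")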

Examples

structures function

First, a simple example with two structures:

$ AptaMat.py -structures "(((...)))" "((.....))"
 (((...)))
 ((.....))
> AptaMat : 0.08

Then, it is possible to input several structures:

$ AptaMat.py -structures "(((...)))" "((.....))" ".(.....)." "(.......)"
 (((...)))
 ((.....))
> AptaMat : 0.08

 (((...)))
 .(.....).
> AptaMat : 0.2

 (((...)))
 (.......)
> AptaMat : 0.3

files function

Taking the above file example:

$ AptaMat.py -files example.fa
5HRU
Template - Compared
 ((((.........(((((.....)))))))))
 .........(((.(((((.....))))).)))
> AptaMat : 0.1134453781512605

Note

Compared structures need to have the same length as the Template structure.

For the moment, no feature has been included to check whether a base pair can actually exist according to the literature. You must be careful about the input sequence and its associated base pairing.

The script accepts the extended dot-bracket notation, which is useful for comparing pseudoknots or tetrads. However, the resulting distance might not be accurate.
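
Because the extended notation mixes several bracket families ('(', '[', '{', '<' plus letter pairs), extracting base pairs needs one stack per family. Below is a hedged sketch of such an extraction, assuming the usual convention that uppercase letters open pairs and lowercase letters close them; it is an illustration, not AptaMat's internal code:

import string

OPENERS = "([{<" + string.ascii_uppercase
CLOSERS = ")]}>" + string.ascii_lowercase
PARTNER = dict(zip(CLOSERS, OPENERS))

def extended_pairs(structure):
    """Return the (i, j) base pairs of an extended dot-bracket string (sketch)."""
    stacks = {opener: [] for opener in OPENERS}
    pairs = []
    for j, symbol in enumerate(structure):
        if symbol in stacks:
            stacks[symbol].append(j)
        elif symbol in PARTNER:
            i = stacks[PARTNER[symbol]].pop()   # matching opening position
            pairs.append((i, j))
    return pairs

# Example with a pseudoknot-style structure:
# extended_pairs("([..[)...(]..])")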


Comments
  • Allow comparison with unfolded secondary structures

    Users may want to perform quantitative analysis and assign a distance to unfolded oligonucleotides compared against folded ones anyway, for example in a pipeline. Different solutions can be considered:

    • Give a default distance value for unfolded vs. folded structures (worst solution)
    • Set the distance equal to the maximum number of observable base pairs: len(structure)//2. Several issues could arise from this:
      • How would this be handled with enhancement #7? Take the largest? The shortest?
      • It would give an abnormally high distance value that remains constant even when different structure foldings are compared to the same unfolded structure. Considering our main advantage over other algorithms, failing to rank at this point is not good.
    • Assign to each point of the matrix that shows a folding the Manhattan distance to the farthest theoretical point in the structure, plus 1. This gives a large distance between the two structures regardless of their size, and the +1 prevents a tie with an actually folded structure whose point lies at the same coordinate as the farthest theoretical point. Moreover, we obtain different scores when comparing different foldings to the same unfolded structure.
    enhancement 
    opened by GitHuBinet 0
  • Different length support and optimal alignment

    Allow alignment of structures of different lengths. This would surely need an optimal structure alignment that makes the AptaMat distance the lowest for a shared motif. Maybe the missing bases should be considered in the score calculation.

    enhancement 
    opened by GitHuBinet 0
  • Is the algorithm time-consuming?

    Considering the expected structure size (less than 100 nt), the calculation runs quite fast. However, the calculation can theoretically take longer when the structure is larger, with a complexity around log(n^2). Possible improvements can be considered, as this time complexity is linked to the double traversal of the dot-bracket input.

    • [ ] Think about the possibility of improving this bracket search.
    • [ ] Study the .ct notation for ssNA secondary structures (see the ".ct notation" enhancement)
    • [x] #6
    • [ ] Test the algorithm with this new feature
    question 
    opened by GEC-git 0
  • G-quadruplex/pseudoknot comprehension

    Add features for G-quadruplex and pseudoknot comprehension. These kinds of secondary structures require the extended dot-bracket notation. https://www.tbi.univie.ac.at/RNA/ViennaRNA/doc/html/rna_structure_notations.html

    The '([{<' characters and string.ascii_uppercase are already included, but some doubt remains about comparison accuracy because no tests have been done on this kind of secondary structure.

    • [ ] Perform some trials on G-quadruplexes & pseudoknots and conclude about comparison reliability. /!\ The complexity comes from the G-quadruplex structures. The tetrads can form base pairs in many different ways and some secondary structure notations can be similar. Here is an example of a case with the same interacting guanines: GGTTGGTGTGGTTGG ([..[)...(]..]) ((..)(...)(..))
    • [x] #5
    enhancement invalid 
    opened by GEC-git 0
Releases(v0.9-pre-release)
  • v0.9-pre-release(Oct 28, 2022)

    Pre-release content

    https://github.com/GEC-git/AptaMat

    • Create LICENSE by @GEC-git in https://github.com/GEC-git/AptaMat/pull/2
    • main script AptaMat.py
    • README.MD edited and published
    • Beta AptaMat logo edited and published

    Contributors

    • @GEC-git contributed in https://github.com/GEC-git/AptaMat
    • @GitHuBinet contributed in https://github.com/GEC-git/AptaMat

    Full Changelog: https://github.com/GEC-git/AptaMat/commits/v0.9-pre-release

    Source code(tar.gz)
    Source code(zip)
Owner
GEC UTC
We are the "Genie Enzymatique et Cellulaire" CNRS UMR 7025 research unit.