TUPÃ: Electric field analyses for molecular simulations

What is TUPÃ?

TUPÃ (pronounced tu-pan) is a Python tool that uses the MDAnalysis engine to calculate electric fields at any point inside the simulation box throughout MD trajectories. TUPÃ also includes a PyMOL plugin to visualize electric field vectors together with molecules.
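
To give a sense of the quantity being computed: at each frame, the electric field at the target point is essentially the Coulomb sum over the partial charges of the selected environment atoms. The snippet below is a minimal sketch of that idea using MDAnalysis directly; it is not TUPÃ's implementation, the file names and selections are placeholders, and the unit conversion that TUPÃ applies to its output is omitted.

import numpy as np
import MDAnalysis as mda

u = mda.Universe("system.psf", "trajectory.dcd")                           # placeholder input files
env = u.select_atoms("protein and around 10 resname LIG", updating=True)   # placeholder environment selection
target = u.select_atoms("resname LIG and name O1")                         # placeholder probe atom

for ts in u.trajectory:
    r0 = target.center_of_geometry()            # target point for this frame
    rvec = r0 - env.positions                   # vectors from each charge to the target (Angstrom)
    dist = np.linalg.norm(rvec, axis=1)
    # E ~ sum_i q_i * (r0 - r_i) / |r0 - r_i|^3 ; constant prefactor / unit conversion omitted
    field = np.sum((env.charges / dist**3)[:, None] * rvec, axis=0)
    print(ts.frame, field)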

Required packages:

  • MDAnalysis >= 1.0.0
  • Python >= 3.x
  • Numpy >= 1.2.x

Installation instructions

First, make sure you have all required packages installed. For MDAnalysis installation procedures, refer to the MDAnalysis documentation.

Then, clone this repository into a folder of your choice:

git clone https://github.com/mdpoleto/tupa.git

To run TUPÃ conveniently, add an alias pointing to the cloned TUPÃ folder in your ~/.bashrc:

alias tupa="python /path/to/the/cloned/repository/TUPA.py"

To install the PyMOL plugin, open PyMOL > Plugin Manager and click on the "Install New Plugin" tab. Load the TUPÃ plugin and use it via the command line within PyMOL. For usage instructions, read our FAQ.

TUPÃ Usage

TUPÃ calculations are controlled by parameters provided in a configuration file. A template configuration file can be generated with the command:

tupa -template config.conf

The configuration file usually contains:

[Environment Selection]
sele_environment      = (string)             [default: None]

[Probe Selection]
mode                = (string)             [default: None]
selatom             = (string)             [default: None]
selbond1            = (string)             [default: None]
selbond2            = (string)             [default: None]
targetcoordinate    = [float,float,float]  [default: None]
remove_self         = (True/False)         [default: False]
remove_cutoff       = (float)              [default: 1 A ]

[Solvent]
include_solvent     = (True/False)         [default: False]
solvent_cutoff      = (float)              [default: 10 A]
solvent_selection   = (string)             [default: None]

[Time]
dt                  = (integer)            [default: 1]

A complete explanation of each option in the configuration file is available via the command:

tupa -h

TUPÃ has three calculation MODES:

  • In ATOM mode, the coordinates of one atom are tracked throughout the trajectory to serve as the target point. If more than one atom is provided in the selection, the center of geometry (COG) is used as the target position. An example is provided HERE.

  • In BOND mode, the midpoint between two atoms is tracked throughout the trajectory to serve as the target point. In this mode, the bond axis is used to calculate the electric field alignment. By default, the bond axis is defined as selbond1 ---> selbond2. An example is provided HERE, and a sample configuration for this mode is shown after this list.

  • In COORDINATE mode, a list of [X,Y,Z] coordinates serves as the target point in all trajectory frames. An example is provided HERE.
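
For illustration, a filled-in configuration for BOND mode might look like the block below. The selection strings, solvent residue name, and cutoff are placeholders chosen for this sketch, not values prescribed by TUPÃ; run tupa -h to check the exact accepted values for each option.

[Environment Selection]
sele_environment      = protein and not resname LIG

[Probe Selection]
mode                = BOND
selbond1            = resname LIG and name O1
selbond2            = resname LIG and name C1

[Solvent]
include_solvent     = True
solvent_cutoff      = 10
solvent_selection   = resname TIP3

[Time]
dt                  = 1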

IMPORTANT:

  • All selections must be compatible with MDAnalysis syntax.
  • TUPÃ does not handle PBC images yet! Trajectories MUST be re-imaged before running TUPÃ (see the re-imaging sketch after this list).
  • Solvent molecules in PBC images are selected if they are within the cutoff. This is achieved via the around selection feature in MDAnalysis.
  • TUPÃ does not account for Particle Mesh Ewald (PME) electrostatic contributions! To minimize such effects, center your target within the box as well as possible.
  • If using COORDINATE mode, make sure your trajectory is free of global translations and rotations, as TUPÃ does not correct for them.
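
Because TUPÃ expects re-imaged trajectories, the sketch below shows one possible way to make molecules whole, center a region of interest, and write a new trajectory using MDAnalysis on-the-fly transformations. This is not part of TUPÃ, the file names and the protein selection are placeholders, and the same result can be obtained with tools such as gmx trjconv or cpptraj.

import MDAnalysis as mda
from MDAnalysis import transformations

u = mda.Universe("system.psf", "trajectory.dcd")        # placeholder input files
protein = u.select_atoms("protein")                     # placeholder region to center

workflow = [
    transformations.unwrap(u.atoms),                    # make molecules whole across PBC
    transformations.center_in_box(protein),             # center the region of interest in the box
    transformations.wrap(u.atoms, compound="residues"), # re-wrap remaining molecules by residue
]
u.trajectory.add_transformations(*workflow)

with mda.Writer("reimaged.dcd", n_atoms=u.atoms.n_atoms) as w:
    for ts in u.trajectory:
        w.write(u.atoms)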

TUPÃ PyMOL Plugin (pyTUPÃ)

To install the pyTUPÃ plugin in PyMOL, click on Plugin > Plugin Manager and then the "Install New Plugin" tab. Choose the pyTUPÃ.py file and click Install.

The plugin provides three functions that can be called via the command line within PyMOL:

  • efield_point: create a vector at a given atom or set of coordinates.
efield_point segid LIG and name O1, efield=[-117.9143, 150.3252, 86.5553], scale=0.01, color="red", name="efield_OG"
  • efield_bond: create a vector at the midpoint between two selected atoms.
efield_bond resname LIG and name O1, resname LIG and name C1, efield=[-94.2675, -9.6722, 58.2067], scale=0.01, color="blue", name="efield_OG-C1"
  • draw_bond_axis: create a vector representing the axis between 2 atoms.
draw_bond_axis resname LIG and name O1, resname LIG and name C1, gap=0.5, color="gray60", name="axis_OG-C1"

Citing TUPÃ

If you use TUPÃ in a scientific publication, we would appreciate citations to the following paper:

Marcelo D. Polêto, Justin A. Lemkul. TUPÃ: Electric field analyses for molecular simulations, 2022.

Bibtex entry:

@article{TUPÃ2022,
    author = {Pol\^{e}to, M D and Lemkul, J A},
    title = "{TUPÃ: Electric field analyses for molecular simulations}",
    journal = {},
    year = {},
    month = {},
    issn = {},
    doi = {},
    url = {},
    note = {},
    eprint = {},
}

Why TUPÃ?

In Brazilian folklore, Tupã is considered a "manifestation of God in the form of thunder". To learn more, refer to this.

Contact information

E-mail: [email protected] / [email protected]

Releases

  • v1.4.0 (Aug 3, 2022)

    TUPÃ update (Aug 03 2022):

    • Empty environment selection now issues an error.
    • Empty probe selection now issues an error.
    • Improved Help/Usage.
    • Configuration file examples are based on common syntax.
  • v1.3.0 (Jun 22, 2022)

    TUPÃ update (Jun 21 2022):

    • -dumptime now accepts multiple entries.
    • Add average and standard deviation values at the end of ElecField_proj_onto_bond.dat and ElecField_alignment.dat.
    • Add Angle column in ElecField_alignment.dat with the average angle between Efield(t) and the bond axis.
    • Fix documentation issues/typos.
  • v1.2.0 (Apr 18, 2022)

    TUPÃ update (Apr 18 2022):

    • -dump now writes the entire system instead of just the environment selection.
    • Add field average and standard deviation values at the end of ElecField.dat.
    • Fix documentation issues/typos.
    • Update paper metadata.
  • v1.1.0 (Mar 23, 2022)

    TUPÃ update (Mar 22 2022):

    • Inclusion of LIST mode: TUPÃ reads a file containing XYZ coordinates that will be used as the probe position. Useful for binding sites or other pockets.
    • Fix documentation issues/typos.

    pyTUPÃ update (Mar 22 2022):

    • Support for a 3D representation of the electric field standard deviation as a truncated cone around the electric field arrow.
  • v1.0.0 (Feb 9, 2022)

    TUPÃ first release (Feb 13 2022):

    • Calculation modes available: ATOM, BOND, COORDINATE.
    • Support for triclinic simulation boxes only.
    • PBC support is limited to triclinic boxes. Future versions are expected to handle PBC corrections.
    • Removal of "self-contributions" is available in COORDINATE mode only.
    • Users can dump a specific frame as a .pdb file. Future versions are expected to allow the extraction of the environment set coordinates.
    • Residue contributions are calculated.

    pyTUPÃ first release (Feb 13 2022):

    • Support for draw_bond, efield_bond and efield_point.
    • EField vectors can be scaled up/down.