A set of functions and analysis classes for solvation structure analysis

Overview

SolvationAnalysis



The macroscopic behavior of a liquid is determined by its microscopic structure. For ionic systems, such as batteries and many enzymes, the solvation environment surrounding ions is especially important. By studying the solvation of interesting materials, scientists can better understand, engineer, and design new technologies. The aim of this project is to implement a robust and cohesive set of methods for solvation analysis that will be widely useful in both biomolecular and battery electrolyte simulations. The core of the solvation module will be a set of functions for easily working with ionic solvation shells. Building on that core functionality, the module will implement several analysis methods for ion pairing, ion speciation, residence times, and shell association and dissociation.
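For orientation, a minimal usage sketch of the intended workflow. The constructor form and the coordination_number property are taken from the PR discussions further down this page; the import path and the `.run()` call are assumptions (the analysis classes are planned to subclass the MDAnalysis AnalysisBase), so the released API may differ:

    import MDAnalysis as mda
    from solvation_analysis.solute import Solute  # import path assumed

    u = mda.Universe("topology.pdb", "trajectory.dcd")  # placeholder files
    li = u.select_atoms("element Li")                   # ions of interest
    water = u.select_atoms("resname SOL")               # one solvent species

    # constructor form as proposed in the PRs below
    solute = Solute(li, {"water": water})
    solute.run()  # AnalysisBase convention; exact invocation may differ
    print(solute.coordination_number["water"])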

Main development by @orioncohen, with mentorship from @richardjgowers, @IAlibay, and @hmacdope.

Acknowledgements

Project based on the Computational Molecular Science Python Cookiecutter version 1.5.

Comments
  • analysis functions

    analysis functions

    Description

    This PR will establish the analysis functions for the solvation module. Linked to issues #11, #21, and #28.

    Todos

    Notable points that this PR has either accomplished or will accomplish.

    • [x] create ion speciation class
    • [x] create coordination number class
    • [x] create solvation god class
    • [x] update documentation
    • [x] finalize data structure
    • [x] complete implementation
    • [x] finalize in-line documentation

    Status

    • [x] Ready to go
    core high-priority 
    opened by orionarcher 34
  • Parse RDF's and find minima

    Parse RDF's and find minima

    Description

    Adds test data along with functionality to interpolate radial distribution functions (RDFs) and locate their minima, refines the minimum finding, and solidifies the unit tests.
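
    Not part of the PR itself, but a minimal NumPy/SciPy sketch of the general interpolate-then-find-minimum approach described in the todos (the function below is purely illustrative):

    import numpy as np
    from scipy.interpolate import InterpolatedUnivariateSpline

    def first_rdf_minimum(bins, rdf, search_region=(1.5, 6.0)):
        # bins, rdf: 1D arrays of r and g(r) (e.g. from an MDAnalysis InterRDF run)
        spline = InterpolatedUnivariateSpline(bins, rdf, k=4)  # k=4 so derivative roots are available
        extrema = spline.derivative().roots()                  # candidate extrema of g(r)
        lo, hi = search_region                                 # ignore noise at very small r
        extrema = extrema[(extrema > lo) & (extrema < hi)]
        minima = [r for r in extrema if spline.derivative(2)(r) > 0]  # keep true minima
        return min(minima) if minima else None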

    Todos

    • [x] add test data
    • [x] functionality to interpolate rdfs
    • [x] functionality to find minima
    • [x] refine minima finding
    • [x] solidify unit testing

    Status

    • [x] Ready to go
    core testing high-priority 
    opened by orionarcher 20
  • Basic testing with Pytest

    Basic testing with Pytest

    Description

    Set up basic testing for current implemented functions.

    Todos

    Notable points that this PR has either accomplished or will accomplish.

    • [x] get_radial_shell testing
    • [x] get_closest_n_mol testing
    • [x] slightly rework get_closest_n_mol signature

    Status

    • [x] Ready to go
    core testing 
    opened by orionarcher 17
  • tutorials

    tutorials

    Description

    Create a convenient tutorial for users to learn solvation_analysis

    Todos

    Notable points that this PR has either accomplished or will accomplish.

    • [x] jupyter tutorial for users
    • [x] set up jupyter edit requests

    Status

    • [x] Ready to go
    documentation high-priority 
    opened by orionarcher 12
  • Decide hierarchy of analysis classes/methods

    Decide hierarchy of analysis classes/methods

    Some of our proposed functionality follows on directly from other functionality and some does not.

    This means we need to discuss how to compartmentalise functionality into classes and methods, many of which will subclass AnalysisBase.

    I thought I would make this issue an open discussion area for how we want to compartmentalise/hierarchicalise our functionality. I think the best way to proceed is if @orioncohen can lay out how he sees it and then @richardjgowers, @IAlibay, and any @MDAnalysis/gsoc-mentors can weigh in. We may also need to meet to discuss this. :)
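
    For reference, the boilerplate pattern an MDAnalysis AnalysisBase subclass follows (an illustrative sketch only, not proposed solvation_analysis code):

    from MDAnalysis.analysis.base import AnalysisBase

    class ShellCount(AnalysisBase):
        # toy example: count solvent residues within `radius` of the solute each frame
        def __init__(self, solute_atoms, solvent_atoms, radius=3.0, **kwargs):
            super().__init__(solute_atoms.universe.trajectory, **kwargs)
            self.solute_atoms = solute_atoms
            self.solvent_atoms = solvent_atoms
            self.radius = radius

        def _prepare(self):
            self.counts = []  # set up result containers before iteration

        def _single_frame(self):
            shell = self.solvent_atoms.select_atoms(
                f"around {self.radius} group solute", solute=self.solute_atoms
            )
            self.counts.append(len(shell.residues))

        def _conclude(self):
            self.mean_count = sum(self.counts) / len(self.counts)

    # usage: ShellCount(li_atoms, water_atoms, radius=3.0).run().mean_count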

    question core high-priority 
    opened by hmacdope 11
  • add visualization tutorial

    add visualization tutorial

    Description

    Since nglview is causing issues and I don't want to get bogged down before the release, I wanted to put this in a separate thread.

    Todos

    Notable points that this PR has either accomplished or will accomplish.

    • [x] tutorial introduction
    • [x] fix nglview issues
    • [x] finish tutorial

    Questions

    • [x] Why is nglview showing too many bonds on molecules?

    Status

    • [x] Ready to go
    bug documentation enhancement 
    opened by orionarcher 10
  • Initial functions for solvation selection + documentation + testing

    Initial functions for solvation selection + documentation + testing

    Description

    Implements basic functionality for solvation selection: get_atom_group, get_n_shells, get_closest_n_mol, and get_radial_shell. Addresses issue #5.
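
    A rough usage sketch of the new selection helpers. The import path and the exact signatures are assumptions inferred from the function names, not confirmed by this PR:

    import MDAnalysis as mda
    # import path below is an assumption
    from solvation_analysis.solvation import get_closest_n_mol, get_radial_shell

    u = mda.Universe("topology.pdb", "trajectory.dcd")  # placeholder files
    li = u.select_atoms("element Li")[0]                # a single solute atom

    # assumed signatures: (central atom, n molecules) and (central atom, radius)
    closest_five = get_closest_n_mol(li, 5)   # AtomGroup of the 5 nearest molecules
    shell = get_radial_shell(li, 3.0)         # AtomGroup of everything within 3 Angstroms
    print(closest_five.n_atoms, shell.n_atoms)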

    Todos

    Notable points that this PR has either accomplished or will accomplish.

    • [x] implement basic functionality for solvation selection
    • [x] documentation for functions
    • [x] unit tests for basic functionality

    Questions

    • [x] what doc style should be used?

    Status

    • [x] ready to go
    opened by orionarcher 10
  • Support for multi atom solutes

    Support for multi atom solutes

    Description

    This PR was split off from PR #70 to better separate quality-of-life changes from API changes. See that PR for some history and discussion.

    This is intended to be a major PR to handle multi-atom solutes. It relates to issues #47, #66, and #58.

    I would like to propose the following outline of new functionality. In this description, I will focus on the outward facing API. I'll use the somewhat trivial case of water as an example.

    1. Solution will be renamed to Solute. All references to solute in the current documentation will be renamed to solvated atom or solvated atoms. I think this better captures what the Solute class really is, especially as we expand to multi-atom Solutes.

    2. The default initializer for Solute will take only a single atom per residue. It will not support multiple identical atoms on a residue. This will be handled by the more general case. As a result, instantiating a Solute for a single atom remains the same.

    water = u.select_atoms(...)
    water_O = u.select_atoms(...)
    water_O_solute = Solute(water_O, {"water": water})
    
    3. Additional initializers will be added to instantiate a Solute; these will support multi-atom solutes.

    The first will allow the user to stitch together multiple solutes to create a new solute.

    water_H1 = u.select_atoms(...)
    water_H2 = u.select_atoms(...)
    solute_O = Solute(water_O, {"water": water})
    solute_H1 = Solute(water_H1, {"water": water})
    solute_H2 = Solute(water_H2, {"water": water})
    
    multi_atom_solute = Solute.from_solutes([solute_O, solute_H1, solute_H2])  # maybe this should be a dict?
    

    The second will allow users to simply instantiate a solute from an entire residue (or part of a residue). There may be technical challenges here, so this behavior is not guaranteed.

    multi_atom_solute = Solute.from_residue(water, {"water": water})
    
    4. To support this, the solvation_data dataframe will have two additional columns: a "residue" column and a "solute_name" column. All analysis classes will be refactored to operate on the "residue" column rather than the "solvated_atom" column. This will make no difference for single-atom solutes but will allow the analysis classes to generalize easily. I'm not completely sure the "solute_name" column is necessary, but it would be convenient to have.

    5. When a multi-atom solute is created, all of the solvation_data dataframes from each constituent single-atom solute will be merged together. The "residue" column will group together solvated atoms on the same residue so that the analysis classes can operate on the whole solute. The API for accessing the analysis classes will be identical.

    multi_atom_solute.coordination_number["water"]   # valid property
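
    For illustration, a minimal pandas sketch (column names taken from this outline, values invented) of how analysis on the merged solvation_data could group by the proposed "residue" column:

    import pandas as pd

    # hypothetical merged solvation_data for a three-atom water solute
    solvation_data = pd.DataFrame({
        "frame":         [0, 0, 0, 0],
        "solvated_atom": [10, 11, 12, 10],
        "residue":       [3, 3, 3, 3],
        "solute_name":   ["O", "H1", "H2", "O"],
        "solvent":       ["water"] * 4,
        "distance":      [2.7, 1.9, 2.1, 3.0],
    })

    # aggregate per residue (i.e. per whole solute) rather than per solvated atom,
    # e.g. a coordination number as the mean neighbour count per frame and solvent
    coordination = (
        solvation_data
        .groupby(["frame", "residue", "solvent"])
        .size()
        .groupby("solvent")
        .mean()
    )
    print(coordination)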
    
    6. We will retain all of the single-atom Solutes as a property of the multi-atom Solute. This would amount to a rough doubling of the memory footprint, but it would make follow-up analysis easier. I'm a bit torn here and there may be a better way.
    >>> print(water.atoms)  # what should this be called?
    [solute_O, solute_H1, solute_H2]  # maybe this should be a dict?
    

    For a single-atom solute, the atoms list would still be present, but the data within would be identical to the solvation_data of the solute itself. Single-atom solutes are then just a special case of multi-atom solutes.

    water_O_solute.atoms[0].solvation_data = water_O_solute.solvation_data
    

    I'm sure there are many things I am not considering that will come up later, but as a start, I think this plan will allow the package to be generalized with maximum code reuse. I'd love feedback or suggestions on any aspect of the outline above.

    Todos

    Notable points that this PR has either accomplished or will accomplish.

    • [x] make solutions composable so that multi-atom solutes can be constructed systematically
    • [x] put guardrails in place to prevent misuse
    • [ ] bare minimum rewrite of the documentation

    Status

    • [ ] Ready to go
    enhancement core 
    opened by orionarcher 8
  • Residence times and solute-solvent network calculations

    Residence times and solute-solvent network calculations

    Description

    This PR adds analysis modules to calculate residence times and solute-solvent networks.
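
    Not the module's implementation, but a minimal NumPy sketch of the 1/e decay-point idea mentioned in the todos (the input format here is an assumption):

    import numpy as np

    def residence_time_1_over_e(adjacency, dt):
        # adjacency: (n_frames, n_pairs) boolean array, True when a given solvent
        # is in the solute's shell at that frame; dt: time between frames
        x = adjacency.astype(float)
        x -= x.mean(axis=0)  # remove mean occupancy
        n = x.shape[0]
        # for each lag, average the auto-covariance over frames and all solute-solvent pairs
        acf = np.array([(x[: n - lag] * x[lag:]).mean() for lag in range(n)])
        acf /= acf[0]  # normalize so acf[0] == 1
        # residence time = first lag at which the decay crosses 1/e
        below = np.where(acf < 1 / np.e)[0]
        return below[0] * dt if below.size else np.nan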

    Todos

    Notable points that this PR has either accomplished or will accomplish.

    • [x] Residence module
    • [x] Change exponential fit in residence time calculations to 1/e decay point
    • [x] Networking module
    • [x] Residence testing
    • [x] Networking testing
    • [x] Residence documentation
    • [x] Networking documentation
    • [x] Add diluent_composition analysis to Pairing class
    • [x] Add module selection kwarg to Solution
    • [x] Add from_solution class method to all analysis classes
    • [x] Residence and Networking tutorial
    • [x] add citations for Networking and Residence codes
    • [x] improve documentation for Residence time
      • [x] specify caveats and difference between both implementations

    Status

    The PR is nearly finished; the main outstanding items are improved documentation, citations, and tutorials.

    enhancement 
    opened by orionarcher 8
  • Formalize Roadmap in Projects tab

    Formalize Roadmap in Projects tab

    We should decide on a model for a release schedule. Do we wish to match the MDA core idea and do quarterly releases (@IAlibay, is that right?), or do we just want to release with each major functionality change, i.e. at your discretion @orioncohen?

    If people want updated functionality and bugfixes, they can always clone the current main branch and install with `pip install -e .`

    Raising this issue as food for thought.

    release 
    opened by hmacdope 8
  • Enhancements to support composable Solutions and self-solvating solutes

    Enhancements to support composable Solutions and self-solvating solutes

    UPDATE:

    This PR now implements a number of quality-of-life changes and solves issue #31. The proposed multi-atom solute changes will be implemented in another PR; see PR #72 for the new changes!

    Todos

    Notable points that this PR has either accomplished or will accomplish.

    • [x] add new testing data with a multi-atom solute
    • [x] remove foolish internal numbering scheme of solvated_atoms
    • [x] allow solutes to also act as solvents
    • [x] replace all column name strings with variables stored in a column_names.py file

    Status

    • [x] Ready to go

    Description

    This PR was originally drafted as the major PR to handle multi-atom solutes (relating to issues #47, #31, #66, and #58) and carried the same proposal now outlined in PR #72 above; it also fixes the self-solvation case identified in issue #31. The remaining multi-atom solute items (composable solutions and misuse guardrails) are tracked in PR #72.
    enhancement core high-priority 
    opened by orionarcher 7
  • Concatenation Issue for Residence Time Calculation

    Concatenation Issue for Residence Time Calculation

    I am using Residence.from_solute(solute) to calculate the residence time between the trimers' N atoms and the waters' O atoms. It always reports the same concatenation error, which seems to come from the auto-covariance calculation (Fig 1). However, when I calculate the residence time between the anion and water, there is no such problem. I have attached the solvation_data of the anion solution (Fig 2) and the trimer solution (Fig 3) for reference.

    [Screenshots attached: Fig 1 shows the concatenation error traceback from the auto-covariance calculation, Fig 2 the anion solvation_data, and Fig 3 the trimer solvation_data.]

    opened by SophiaRuan 0
  • Styling and consistency of documentation could be improved

    Styling and consistency of documentation could be improved

    This is split off from PR #78 to address two points: the consistency and the formatting of the documentation.

    Todos:

    • [ ] closely read over all documentation for errors
    • [ ] make sure all syntax and code styling is consistent
    • [ ] enhance aesthetic styling of documentation
    documentation 
    opened by orionarcher 0
  • Work towards MDAKit integration

    Work towards MDAKit integration

    Now that MDAKits are ready to roll, we should work towards registering solvation-analysis as an MDAKit!

    See the blog post for more info.

    @orionarcher I would be interested to know if you would prefer to just go for it or wait for 0.2.0 (and some conda packages).

    AFAIK solvation-analysis already meets all the requirements listed in the white paper.

    opened by hmacdope 2
  • Solvation plots

    Solvation plots

    Description

    Provide a brief description of the PR's purpose here.

    Todos

    Notable points that this PR has either accomplished or will accomplish.

    • [ ] TODO 1

    Questions

    • [ ] Question1

    Status

    • [ ] Ready to go
    opened by laurlee 1
  • Create a `save_data` method for solution

    Create a `save_data` method for solution

    This method should dump the core solvation statistics to a Python dict. It will not contain enough information to reconstitute the Solution.

    Brought up in issue #52.
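
    A hypothetical sketch of what such a method could produce; the attribute names below are illustrative assumptions, not the package API:

    def save_data(solution):
        # collect core solvation statistics into a plain, serializable dict;
        # this is not enough information to reconstitute the Solution
        return {
            "coordination_numbers": dict(solution.coordination_number),  # assumed attribute
            "solvent_pairing": dict(solution.pairing),                    # assumed attribute
        }

    # because the result is plain Python data, it can be written out directly, e.g.
    # import json; json.dump(save_data(solution), open("solvation_stats.json", "w"))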

    opened by orionarcher 0
Releases (v0.1.4)
  • v0.1.4(Jun 28, 2022)

  • v0.1.3(Mar 17, 2022)

  • v0.1.2-beta(Sep 23, 2021)

    New functionality:

    • co-occurrence matrix calculation and plotting
    • identify types of coordinating atoms
    • find percentage of free solvent

    Bug fixes:

    • all AtomGroup.ids and ResidueGroup.resids changed to AtomGroup.ix and ResidueGroup.resindices
  • v0.1.2(Sep 23, 2021)

    New functionality:

    • co-occurrence matrix calculation and plotting
    • identify types of coordinating atoms
    • find percentage of free solvent

    Bug fixes:

    • all AtomGroup.ids and ResidueGroup.resids changed to AtomGroup.ix and ResidueGroup.resindices
  • v0.1.1-alpha(Aug 19, 2021)

  • v0.1.1(Aug 19, 2021)

Owner
MDAnalysis
MDAnalysis is an object-oriented Python library to analyze molecular dynamics trajectories.