health-lesion-stovol

Healthy and lesion models for learning based on the joint estimation of stochasticity and volatility

Reference

Please cite this paper if you use this code: Piray P and Daw ND, 'A model for learning based on the joint estimation of stochasticity and volatility', Nature Communications, 2021.

Description of the models

This work addresses the problem of learning in noisy environments, in which the agent must draw inferences (e.g., about true reward rates) from observations (individual reward amounts) that are corrupted by two distinct sources of noise: process noise, or volatility, and observation noise, or stochasticity. Volatility captures the speed at which the true value being estimated changes from trial to trial (modeled as Gaussian diffusion); stochasticity describes additional measurement noise in the observation of each outcome around its true value (modeled as Gaussian noise on each trial). The celebrated Kalman filter performs inference assuming known values for both stochasticity and volatility. These two quantities have opposite effects on the learning rate (i.e., the Kalman gain): volatility increases the learning rate, whereas stochasticity decreases it.
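
To make the opposing effects concrete, here is a minimal sketch of a one-dimensional Kalman filter update (illustrative only, not code from this repository), in which volatility v and stochasticity s enter the gain in opposite directions:

```python
def kalman_update(m, w, y, v, s):
    """One 1-D Kalman filter update with process variance v (volatility)
    and observation variance s (stochasticity)."""
    k = (w + v) / (w + v + s)   # Kalman gain, i.e. the learning rate
    m = m + k * (y - m)         # mean update (delta rule)
    w = (1 - k) * (w + v)       # posterior variance update
    return m, w, k
```

Increasing v pushes the gain toward 1 (faster updating), whereas increasing s pushes it toward 0 (slower updating).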

The learning models implemented here generalize the Kalman filter by also learning stochasticity and volatility from observations. An important point is that inferences about volatility and stochasticity are mutually interdependent, and the details of this interdependence are themselves informative. From the learner's perspective, distinguishing volatility from stochasticity when both are unknown is challenging, because both increase the noisiness of observations. Disentangling their respective contributions requires trading off two opposing explanations for the pattern of observations, a process known in Bayesian probability theory as explaining away. This insight motivates two lesion models: a stochasticity lesion model that tends to misidentify stochasticity as volatility and inappropriately increases its learning rate, and a volatility lesion model that tends to misidentify volatility as stochasticity and inappropriately decreases its learning rate.

Description of the code

learning_models.py contains two classes of learning models:

  1. LearningModel, which includes the healthy model and the two lesion models (the stochasticity lesion and volatility lesion models)
  2. LearningModelGaussian, which is similar to LearningModel but assumes Gaussian generative (diffusion) processes for stochasticity and volatility

Inference in both classes combines a particle filter with a Kalman filter. Given particles for stochasticity and volatility, the Kalman filter updates its estimate of the mean and variance of the state (e.g., the reward rate). The main results shown in the reference paper are very similar for both classes of generative process. The particle filter is implemented in particle_filter.py.
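
For intuition, the following is a minimal sketch of this combination (a Rao-Blackwellized particle filter), not the repository's actual implementation. It assumes each particle carries fixed values of v and s, whereas the full models also update these quantities across trials:

```python
import numpy as np

def rbpf_step(particles, y, rng):
    """One trial of a Rao-Blackwellized particle filter (illustrative only).

    Each particle is a dict with keys:
      'v', 's' -- this particle's volatility and stochasticity
      'm', 'w' -- Kalman posterior mean and variance of the state
      'logw'   -- log importance weight
    """
    for p in particles:
        # predictive variance of the observation under this particle's (v, s)
        pred_var = p['w'] + p['v'] + p['s']
        # reweight by the predictive likelihood of the new observation
        p['logw'] += -0.5 * (np.log(2 * np.pi * pred_var)
                             + (y - p['m']) ** 2 / pred_var)
        # conditional Kalman update of the state given (v, s)
        k = (p['w'] + p['v']) / pred_var
        p['m'] += k * (y - p['m'])
        p['w'] = (1 - k) * (p['w'] + p['v'])

    # normalize weights and resample when the effective sample size drops
    logw = np.array([p['logw'] for p in particles])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = [dict(particles[i], logw=0.0) for i in idx]
    return particles
```

Here the Kalman update is performed analytically within each particle, so the particles only need to cover the (v, s) space, which keeps the number of particles manageable.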

sim_example.py simulates the healthy model in a 2x2 factorial design (two different true values for each of stochasticity and volatility). The model does not know the true values and must learn them from observations. Initial values for both stochasticity and volatility are set to the mean of their corresponding true values (and are therefore not informative for dissociating the two). This is akin to Figure 2 of the reference paper.
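
For illustration, a minimal sketch of how observations for such a 2x2 design can be generated (the specific volatility and stochasticity values below are placeholders, not those used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
true_vols = [0.5, 1.5]      # placeholder values for true volatility
true_stochs = [1.0, 3.0]    # placeholder values for true stochasticity

observations = {}
for v in true_vols:
    for s in true_stochs:
        x = np.cumsum(rng.normal(0.0, np.sqrt(v), T))   # diffusing true value
        y = x + rng.normal(0.0, np.sqrt(s), T)          # noisy observations
        observations[(v, s)] = y
```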

sim_lesion_example.py simulates the lesion models in the same 2x2 factorial design described above. This is akin to Figure 3 of the reference paper.

Dependencies:

  - numpy (required for computations in particle_filter.py and learning_models.py)
  - matplotlib (required for visualization in sim_example.py and sim_lesion_example.py)
  - seaborn (required for visualization in sim_example.py and sim_lesion_example.py)
  - pandas (required for visualization in sim_example.py and sim_lesion_example.py)
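
These can be installed with pip, for example:

```
pip install numpy matplotlib seaborn pandas
```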

Other languages

A MATLAB implementation of the model is also available: https://github.com/payampiray/stochasticity_volatility_learning

Author

Payam Piray (ppiray [at] princeton.edu)
