health-lesion-stovol

healthy and lesion models for learning based on the joint estimation of stochasticity and volatility

Reference

Please cite this paper if you use this code: Piray P and Daw ND (2021), 'A model for learning based on the joint estimation of stochasticity and volatility', Nature Communications.

Description of the models

This work addresses the problem of learning in noisy environments, in which the agent must draw inferences (e.g., about true reward rates) from observations (individual reward amounts) that are corrupted by two distinct sources of noise: process noise, or volatility, and observation noise, or stochasticity. Volatility captures the speed at which the true value being estimated changes from trial to trial (modeled as Gaussian diffusion); stochasticity describes additional measurement noise in the observation of each outcome around its true value (modeled as Gaussian noise on each trial). The celebrated Kalman filter performs inference assuming known values for both stochasticity and volatility, and the two quantities have opposite effects on the learning rate (i.e. the Kalman gain): volatility increases the learning rate, whereas stochasticity decreases it.
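As a concrete illustration, the sketch below shows one trial of a Kalman filter update for this generative model with known volatility v and stochasticity s. It is a minimal sketch for exposition, not the code in this repository.

```python
# Minimal sketch (not the repository's code): one Kalman filter update for the
# generative model above, assuming the volatility v and stochasticity s are known.
# The latent state x_t (e.g. the true reward rate) diffuses as x_t = x_{t-1} + N(0, v),
# and each outcome is observed as o_t = x_t + N(0, s).

def kalman_update(m, w, o, v, s):
    """m, w: posterior mean/variance after the previous trial; o: current outcome."""
    w_pred = w + v                # predictive variance grows with volatility
    k = w_pred / (w_pred + s)     # Kalman gain, i.e. the learning rate
    m_new = m + k * (o - m)       # prediction-error update of the mean
    w_new = (1 - k) * w_pred      # posterior variance shrinks by a factor (1 - k)
    return m_new, w_new, k
```

With this parameterization, increasing v pushes the gain k toward 1 (faster learning), while increasing s pushes it toward 0 (slower learning), which is the opposing effect described above.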

The learning models implemented here generalize the Kalman filter by also learning both stochasticity and volatility from observations. Importantly, inferences about volatility and stochasticity are mutually interdependent, and the details of that interdependence are themselves informative. From the learner's perspective, a challenging problem is to distinguish volatility from stochasticity when both are unknown, because both increase the noisiness of observations. Disentangling their respective contributions requires trading off two opposing explanations for the pattern of observations, a process known in Bayesian probability theory as explaining away. This insight motivates two lesion models: a stochasticity lesion model that tends to misidentify stochasticity as volatility and inappropriately increases learning rates, and a volatility lesion model that tends to misidentify volatility as stochasticity and inappropriately decreases learning rates.

Description of the code

learning_models.py contains two classes of learning models:

  1. LearningModel, which includes the healthy model and the two lesion models (the stochasticity lesion and volatility lesion models)
  2. LearningModelGaussian, which is similar to LearningModel but uses Gaussian generative processes for the diffusion of stochasticity and volatility

Inference in both classes is based on a combination of a particle filter and a Kalman filter: given particles for stochasticity and volatility, the Kalman filter updates its estimate of the mean and variance of the state (e.g. the reward rate). The main results shown in the reference paper are very similar for both classes of generative process. The particle filter is implemented in particle_filter.py.
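The sketch below illustrates the general idea of combining the two filters (a Rao-Blackwellized particle filter): each particle carries its own hypothesis about stochasticity and volatility together with a Kalman mean and variance conditioned on that hypothesis, and particles are reweighted by how well they predict each observation. It is an illustration of the approach under these assumptions, not the interface of LearningModel or particle_filter.py.

```python
import numpy as np

# Illustrative sketch of a Rao-Blackwellized particle filter step (not the
# repository's API). Each particle is a dict with its own volatility 'v',
# stochasticity 's', and Kalman mean 'm' / variance 'w' conditioned on them.

def rbpf_step(particles, weights, o, rng):
    loglik = np.zeros(len(particles))
    for i, p in enumerate(particles):
        w_pred = p['w'] + p['v']                 # predictive variance of the state
        pred_var = w_pred + p['s']               # variance of the predicted outcome
        # weight each particle by the predictive likelihood of the observation
        loglik[i] = -0.5 * (np.log(2 * np.pi * pred_var) + (o - p['m']) ** 2 / pred_var)
        k = w_pred / (w_pred + p['s'])           # learning rate for this particle
        p['m'] += k * (o - p['m'])               # Kalman mean update
        p['w'] = (1 - k) * w_pred                # Kalman variance update
    weights = weights * np.exp(loglik - loglik.max())
    weights = weights / weights.sum()
    # resample when the effective sample size drops too low
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = [dict(particles[i]) for i in idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

Unlike this simplified step, the models in learning_models.py additionally propagate each particle's stochasticity and volatility between trials according to their generative processes, so the particles themselves track the changing noise parameters.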

sim_example.py simulates the healthy model in a 2x2 factorial design (crossing two different true values of stochasticity with two different true values of volatility). The model does not know the true values and must learn them from observations. The initial values for both stochasticity and volatility are set to the mean of their corresponding true values (and so are not informative for dissociating the two). This is analogous to Figure 2 of the reference paper.
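For intuition, the sketch below generates synthetic observations for such a 2x2 design using only numpy; the specific true values, trial count, and seeding are assumptions for illustration and need not match those used in sim_example.py.

```python
import numpy as np

# Rough sketch of a 2x2 factorial dataset (assumed values, not necessarily those
# used in sim_example.py): the latent reward rate follows a random walk whose
# increment variance is the true volatility, and each outcome adds observation
# noise whose variance is the true stochasticity.

rng = np.random.default_rng(0)
n_trials = 200
observations = {}
for true_vol in (0.5, 1.5):          # small vs. large true volatility (assumed)
    for true_stc in (1.0, 3.0):      # small vs. large true stochasticity (assumed)
        x = np.cumsum(rng.normal(0.0, np.sqrt(true_vol), n_trials))  # latent reward rate
        o = x + rng.normal(0.0, np.sqrt(true_stc), n_trials)         # noisy outcomes
        observations[(true_vol, true_stc)] = o
```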

sim_lesion_example.py simulates the lesion models in the 2x2 factorial design described above. This is analogous to Figure 3 of the reference paper.

Dependencies:

numpy (required for computations in particle_filter.py and learning_models.py)
matplotlib (required for visualization in sim_example.py and sim_lesion_example.py)
seaborn (required for visualization in sim_example.py and sim_lesion_example.py)
pandas (required for visualization in sim_example.py and sim_lesion_example.py)

Other languages

A MATLAB implementation of the model is also available: https://github.com/payampiray/stochasticity_volatility_learning

Author

Payam Piray (ppiray [at] princeton.edu)
