Code for the paper "Attention Approximates Sparse Distributed Memory"

Overview

Attention Approximates Sparse Distributed Memory - Codebase

This repository contains all of the code used to run the analyses in the paper "Attention Approximates Sparse Distributed Memory" by Trenton Bricken and Cengiz Pehlevan.

Abstract

While Attention has come to be an important mechanism in deep learning, there remains limited intuition for why it works so well. Here, we show that Transformer Attention can be closely related under certain data conditions to Kanerva's Sparse Distributed Memory (SDM), a biologically plausible associative memory model. We confirm that these conditions are satisfied in pre-trained GPT2 Transformer models. We discuss the implications of the Attention-SDM map and provide new computational and biological interpretations of Attention.

Summary of Paper

The main contribution of this paper is to show that Sparse Distributed Memory (SDM), a theory developed in 1988 for how memories are written to and read from neurons, is a very close approximation to the heuristically developed and powerful Transformer Attention (a toy sketch of this mapping follows the list below). This connection is compelling because SDM has biological plausibility, with the cerebellum in particular. SDM also has a number of additional desirable properties that may lead to improvements in Deep Learning (citations and explanations for these statements are provided in the paper):

  • Capable of modelling both autoassociative and heteroassociative relationships.
  • Symbolic representations enabling variable binding, learning from example, analogical reasoning, and generalization.
  • Sparsity providing computational efficiency and robustness to noise.
  • Biological plausibility with striking similarities to the cerebellum. Similarities that warrant further investigation are also present in cortical columns, the hippocampus, the dorsal cochlear nucleus, and the olfactory system in humans, insects, and potentially even cephalopods.
  • Psychological plausibility, including explaining the robust, distributed nature of memories, the speed of recognition, the tip-of-the-tongue phenomenon, and the small-world network between concepts.
  • Additional strong similarities to the Neural Turing Machine (NTM) and Differentiable Neural Computer (DNC).
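
To make the mapping concrete, the following is a minimal toy sketch (not the paper's implementation, which lives in Implementations_Associative_Memory.py): SDM weights each stored pattern by the number of neurons whose addresses fall within Hamming radius d of both the pattern's write address and the query (the "circle intersection"), while Attention weights patterns with a softmax over query-address similarity; with a suitable Beta the two weightings closely agree. All sizes and the Beta value below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_patterns, num_neurons, d = 64, 10, 2000, 26   # hypothetical toy sizes

# Stored (address, pattern) pairs and random neuron addresses, all binary.
addresses = rng.integers(0, 2, size=(num_patterns, n))
patterns = rng.integers(0, 2, size=(num_patterns, n))
neurons = rng.integers(0, 2, size=(num_neurons, n))
query = addresses[0].copy()
query[:5] = 1 - query[:5]          # a noisy version of the first stored address

def hamming(a, b):
    return (a != b).sum(axis=-1)

# SDM-style weighting: each pattern is weighted by how many neuron addresses lie
# within Hamming radius d of BOTH its write address and the query ("circle intersection").
sdm_weights = np.array([
    np.sum((hamming(neurons, addr) <= d) & (hamming(neurons, query) <= d))
    for addr in addresses
], dtype=float)
sdm_weights /= sdm_weights.sum()

# Attention-style weighting: softmax over query-address similarity, scaled by Beta.
beta = 0.25                                              # placeholder; the paper fits Beta to the circle intersection
scores = beta * ((2 * addresses - 1) @ (2 * query - 1))  # inner products of the +/-1 versions of the vectors
attn_weights = np.exp(scores - scores.max())
attn_weights /= attn_weights.sum()

# Both weightings concentrate on pattern 0, whose address the query was derived from.
print(np.round(sdm_weights, 3))
print(np.round(attn_weights, 3))
print(sdm_weights @ patterns)      # SDM-weighted read of the stored patterns
print(attn_weights @ patterns)     # Attention-weighted read of the same patterns
```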

Description of the Codebase

Jupyter Notebooks:

Used to run all code.

  • Softmax_Circle_Approx.ipynb - Computes the approximate circle intersection and shows how it relates to the softmax via the log-linear regression used to fit Beta in the exponential. This is the core contribution of our paper (a minimal sketch of this kind of fit appears after this list).

  • Exp_Approx_Circle_Intersect.ipynb - Implements and tests how well the exponential upper and lower bounds analytically derived for the circle intersection perform.

  • SDM_Experiments.ipynb - Calls on functions in Implementations_Associative_Memory.py and Data_Processing_Associative_Memory.py to test all of the Associative Memory algorithms considered: Neuron Based SDM; Pattern Based SDM with Infinite Neurons; Pattern Based SDM with Finite Neurons; Hopfield Network; Binary SDM with Attention with learnt Beta; SDM Attention with learnt Beta; Transformer Attention.

  • LearnProjections.ipynb - Also calls on functions in Implementations_Associative_Memory.py to learn a projection matrix for the MNIST and CIFAR datasets, before testing how it affects the performance of continuous vectors under three different weightings: Binary SDM Circle Intersection, Continuous SDM Hypersphere Cap Intersection, and Attention Softmax with a Beta fitted to Binary SDM.

  • Neuron_Address_Distribution.ipynb - Computes the probability that at least one neuron is within a given Hamming distance of a random query.

  • SDM_Critical_Distances.ipynb - Plots the Critical Distances under different parameter assumptions.

  • HugFace/Transformer_Empirical_Analysis.ipynb - Computes the effective Betas of the trained GPT models on the chosen text inputs. This notebook lives in the HugFace/ directory, which implements a customized version of the Hugging Face transformers repo: https://github.com/huggingface/transformers. It was necessary to modify that codebase in order to extract the query matrices before their dot product with the keys in the softmax operation.

  • Parse_KeyQ_Norm_Betas.ipynb - Parses and plots the KeyQuery Norm learnt Beta values.

  • Compute_Difference_In_Circle_Intersects.ipynb - Computes how the circle intersection implementations differ from those presented in the SDM book, compares the circle intersection equation derived in the Appendix to that of the book, and compares the book's associated variance equation with that of Jaeckel's Alternative SDM Design (presented and outlined in the paper's Appendix).

  • Optimal_d.ipynb - Computes the optimal Hamming distances for the Signal to Noise Ratio and Memory Capacity.

  • Miscellaneous.ipynb - the name says it all. Different experiments and functions not used in the paper.
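
The following is a hedged sketch, under simplified assumptions, of the kind of computation behind the Beta fit in Softmax_Circle_Approx.ipynb: count the binary vectors within Hamming radius d of both of two addresses a distance dv apart (the circle intersection), then fit the log of that count as a linear function of the corresponding bipolar inner product. The sizes below are arbitrary placeholders, and the exact formulas and normalizations used in the paper live in SDM_Circ_Inter_Funcs.py.

```python
import numpy as np
from math import comb

def circle_intersection(n, d, dv):
    """Number of n-bit vectors within Hamming distance d of BOTH of two addresses
    that are themselves distance dv apart. k counts bit flips among the dv positions
    where the addresses differ; j counts flips among the n - dv positions where they agree."""
    total = 0
    for k in range(dv + 1):
        for j in range(n - dv + 1):
            if k + j <= d and (dv - k) + j <= d:
                total += comb(dv, k) * comb(n - dv, j)
    return total

n, d = 64, 22                      # hypothetical vector dimension and Hamming radius
dvs = np.arange(1, d + 1)          # query-to-address distances to evaluate
intersections = np.array([circle_intersection(n, d, dv) for dv in dvs], dtype=float)

# Log-linear fit: log(intersection) ~ intercept + slope * (query . address), where the
# inner product of the +/-1 versions of two vectors at Hamming distance dv is n - 2*dv.
# The slope plays the role of the fitted Beta inside the softmax exponential.
dots = n - 2 * dvs
slope, intercept = np.polyfit(dots, np.log(intersections), 1)
print(f"fitted Beta ~ {slope:.4f}")
```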

Python Scripts:

Supporting functions for the Jupyter Notebooks.

  • SDM_Circ_Inter_Funcs.py - Contains many heavily used functions, including the implementation of the circle intersection function and the log-linear regression fit to the circle intersection.

  • Implementations_Associative_Memory.py - Handles the algorithmic implementations of all Associative Memory models considered.

  • utils_LearningProjections.py - Called by LearnProjections.ipynb; leverages functions from Implementations_Associative_Memory.py but wraps them in PyTorch backpropagation to learn the projection matrix.

  • Data_Processing_Associative_Memory.py - Applies random perturbations to continuous and binary data inputs in order to evaluate the autoassociative convergence properties of the various algorithms (a sketch of the binary perturbation step appears below).
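
For the binary case, this is a minimal sketch of that perturbation step (the function name here is hypothetical and not the module's actual API): flip a fixed number of randomly chosen bits before asking the memory to converge back to the stored pattern.

```python
import numpy as np

def flip_bits(pattern, num_flips, rng=None):
    """Return a copy of a binary (0/1) pattern with num_flips randomly chosen bits
    flipped -- the kind of corruption applied before testing whether an autoassociative
    memory converges back to the clean stored pattern."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = pattern.copy()
    idx = rng.choice(pattern.size, size=num_flips, replace=False)
    noisy[idx] = 1 - noisy[idx]
    return noisy

# Hypothetical usage: corrupt a stored 64-bit pattern at increasing noise levels.
rng = np.random.default_rng(0)
stored = rng.integers(0, 2, size=64)
for num_flips in (4, 8, 16):
    query = flip_bits(stored, num_flips, rng)
    print(num_flips, int((query != stored).sum()))  # Hamming distance from the original
```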

Folders:

  • figures/ - contains all of the figures used in the paper, plus additional ones, except for those generated by HugFace/Transformer_Empirical_Analysis.ipynb, which are located in the directory listed in the next bullet.

  • HugFace/GPT2Outputs/ - contains all of the GPT2 Transformer analysis figures. Generated by HugFace/Transformer_Empirical_Analysis.ipynb.

  • trained_weights/ - trained weights of the projection matrix for each dataset, Hamming radius, and random initialization.

Data:

  • KeyQuery_Norm_Learnt_Betas.txt - Learnt Beta values from the trained Transformer models of the paper: A. Henry, Prudhvi Raj Dachapally, S. Pawar, and Yuxuan Chen. Query-Key Normalization for Transformers. In EMNLP, 2020.

  • HugFace/text_inputs.txt - Line-separated text inputs fed into GPT2 to infer its effective Betas. This text is used by HugFace/Transformer_Empirical_Analysis.ipynb (a hedged sketch of the query/key extraction step follows below).
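
The repo's own pipeline uses the modified transformers fork in HugFace/ to expose these tensors (see the install notes below). Purely as an indication of what is being extracted, and independently of that setup, a forward hook on a stock Hugging Face GPT-2 can capture the per-layer queries and keys before the softmax. This is a hedged sketch only: module names such as h[0].attn.c_attn follow the current transformers GPT-2 implementation and may differ across versions, and this is not the notebook's actual code.

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

model = GPT2Model.from_pretrained("gpt2").eval()
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
captured = {}

def grab_qk(module, inputs, output):
    # GPT-2's c_attn projects the hidden states to concatenated [query, key, value].
    q, k, _ = output.split(model.config.hidden_size, dim=2)
    captured["query"], captured["key"] = q.detach(), k.detach()

hook = model.h[0].attn.c_attn.register_forward_hook(grab_qk)   # layer 0 attention
with torch.no_grad():
    model(**tokenizer("The cerebellum resembles an associative memory.", return_tensors="pt"))
hook.remove()

# Query and key norms for layer 0, taken before the scaled dot product and softmax.
print(captured["query"].norm(dim=-1).mean(), captured["key"].norm(dim=-1).mean())
```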

Dependencies

Tested with Python 3.7.5 (should work with Python 3.5 and higher).

To run HugFace/Transformer_Empirical_Analysis.ipynb you will need to install PyTorch 1.5.1 (with or without CUDA, depending on whether you have a GPU): https://pytorch.org/get-started/locally/

If using pip out of the box, cd into this directory and then run: `pip3 install -r SDM/requirements.txt`

If using Conda, ensure pip is installed in the conda environment and then run the same command as above.

Do not install (or uninstall, if it is already installed) HuggingFace's transformers package, as you will need to run the customized version implemented in the HugFace/ directory. cd into that directory and run: `pip install -e .` There may be a couple of additional dependencies it expects, such as tqdm, but these are straightforward to install if and when prompted.

Acknowledgements:

Thanks to the open source community, friends and advisors for making this research possible. This includes but is not limited to:

Dr. Gabriel Kreiman, Alex Cuozzo, Miles Turpin, Dr. Pentti Kanerva, Joe Choo-Choy, Dr. Beren Millidge, Jacob Zavatone-Veth, Blake Bordelon, Nathan Rollins, Alan Amin, Max Farrens, David Rein, Sam Eure, Grace Bricken, and Davis Brown for providing invaluable inspiration, discussions and feedback. Special thanks to Miles Turpin for help working with the Transformer model experiments. We would also like to thank the open source software contributors that helped make this research possible, including but not limited to: Numpy, Pandas, Scipy, Matplotlib, PyTorch, HuggingFace, and Anaconda.

Codebase Author:

Trenton Bricken

License:

This project is licensed under the MIT License - see the LICENSE.md file for details
