An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers"

RASP

Setup

Mac or Linux

Run ./setup.sh. It will create a python3 virtual environment and install the dependencies for RASP. It will also try to install graphviz (the non-python part) and rlwrap on your machine. If these fail, you will still be able to use RASP, but the interface will not be as nice without rlwrap, and drawing s-op computation flows will not be possible without graphviz. After setup, you can run ./rasp.sh to start the RASP read-evaluate-print loop.

Windows

Follow the instructions given in windows instructions.txt

The REPL

After setup, if you are on Mac or Linux, you can run ./rasp.sh to start the RASP REPL. Otherwise, run python3 RASP_support/REPL.py. Use Ctrl+C to quit a partially entered command, and Ctrl+D to exit the REPL.

Initial Environment

RASP starts with the base s-ops: tokens, indices, and length. It also has the base functions select, aggregate, and selector_width as described in the paper, a selector full_s (created through select(1,1,==)) that gives a "full" attention pattern, and several other library functions (check out RASP_support/rasplib.rasp to see them).

Additionally, the REPL begins with a base example, "hello", on which it shows the output for each created s-op or selector. This example can be changed, and toggled on and off, through commands to the REPL.
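For instance, evaluating the base s-ops on that example should show something like the following (a rough sketch; exact formatting may differ slightly between versions):

>> tokens;
     s-op: tokens
 	 Example: tokens("hello") = [h, e, l, l, o] (strings)
>> indices;
     s-op: indices
 	 Example: indices("hello") = [0, 1, 2, 3, 4] (ints)
>> length;
     s-op: length
 	 Example: length("hello") = [5]*5 (ints)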

All RASP commands end with a semicolon. Commands to the REPL -- such as changing the base example -- do not.

Start by following along with the examples -- they are kept at the bottom of this readme.

Note on input types:

RASP expects inputs in four forms: strings, integers, floats, or booleans, handled respectively by tokens_str, tokens_int, tokens_float, and tokens_bool. Initially, RASP loads with tokens set to tokens_str; this can be changed by assignment, e.g.: tokens=tokens_int;. When changing the input type, you will also want to change the base example, e.g.: set example [0,1,2].

Note that assignments do not retroactively change the computation trees of existing s-ops!
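For instance, switching to integer inputs might look something like this (a sketch; the echoed output formatting is approximate):

>> set example [0,1,2]
>> tokens=tokens_int;
     s-op: tokens
 	 Example: tokens([0, 1, 2]) = [0, 1, 2] (ints)
>> indices+tokens;
     s-op: out
 	 Example: out([0, 1, 2]) = [0, 2, 4] (ints)

(The examples later in this readme assume the default tokens_str and string base examples.)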

Writing and Loading RASP files

To save and load RASP code from files, give them the .rasp extension, and use the 'load' command without the extension. For example, you can load the examples file paper_examples.rasp in this repository to the REPL as follows:

>> load "paper_examples";

This will make (almost) all values in the file available in the loading environment (whether the REPL, or a different .rasp file): values whose names begin with an underscore remain private to the file they are written in. Loading files in the REPL will also print a list of all loaded values.
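For instance, suppose you save a file my_helpers.rasp (a hypothetical file, not included in this repository) containing:

_vowels = ["a","e","i","o","u"];
is_vowel = tokens in _vowels;

Then load "my_helpers"; will make is_vowel available in the loading environment, while _vowels, whose name begins with an underscore, remains private to the file.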

Syntax Highlighting

For the Sublime Text editor, you can get syntax highlighting for .rasp files as follows:

  1. Install package control for sublime (you might already have it: look in the menu [Sublime Text]->[Preferences] and see if it's there. If not, follow the instructions at https://packagecontrol.io/installation).
  2. Install the 'packagedev' package through package control ([Sublime Text]->[Preferences]->[Package Control], then type [install package], then [packagedev])
  3. After installing PackageDev, create a new syntax definition file through [Tools]->[Packages]->[Package Development]->[New Syntax Definition].
  4. Copy the contents of RASP_support/RASP.sublime-syntax into the new syntax definition file, and save it as RASP.sublime-syntax.

[The above basically follows the instructions at http://ilkinulas.github.io/programming/2016/02/05/sublime-text-syntax-highlighting.html, then copies in the contents of the provided RASP.sublime-syntax file.]

Examples

Play around in the REPL!

Try simple elementwise manipulations of s-ops:

>> threexindices = 3 * indices;
     s-op: threexindices
 	 Example: threexindices("hello") = [0, 3, 6, 9, 12] (ints)
>> indices+indices;
     s-op: out
 	 Example: out("hello") = [0, 2, 4, 6, 8] (ints)

Change the base example, and create a selector that focuses each position on all positions before it:

>> set example "hey"
>> prevs=select(indices,indices,<);
     selector: prevs
 	 Example:
 			     h e y
 			 h |      
 			 e | 1    
 			 y | 1 1  

Check the output of an s-op on your new base example:

>> threexindices;
     s-op: threexindices
 	 Example: threexindices("hey") = [0, 3, 6] (ints)

Or on specific inputs:

>> threexindices(["hi","there"]);
	 =  [0, 3] (ints)
>> threexindices("hiya");
	 =  [0, 3, 6, 9] (ints)

Aggregate with the full selection pattern (loaded automatically with the REPL) to compute the proportion of a letter in your input:

>> full_s;
     selector: full_s
 	 Example:
 			     h e y
 			 h | 1 1 1
 			 e | 1 1 1
 			 y | 1 1 1
>> my_frac=aggregate(full_s,indicator(tokens=="e"));
     s-op: my_frac
 	 Example: my_frac("hey") = [0.333]*3 (floats)

Note: when an s-op's output is identical in all positions, RASP simply prints the output of one position, followed by "*X" (where X is the sequence length) to mark the repetition.
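For instance, length takes the same value at every position, so (with the base example still "hey") you should see something like:

>> length;
     s-op: length
 	 Example: length("hey") = [3]*3 (ints)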

Check if a letter is in your input at all:

>> "e" in tokens;
     s-op: out
 	 Example: out("hey") = [T]*3 (bools)

Alternately, in an elementwise fashion, check if each of your input tokens belongs to some group:

>> vowels = ["a","e","i","o","u"];
     list: vowels = ['a', 'e', 'i', 'o', 'u']
>> tokens in vowels;
     s-op: out
 	 Example: out("hey") = [F, T, F] (bools)

Draw the computation flow for an s-op you have created, on an input of your choice (this will create a pdf in the comp_flows subfolder of the current directory):

>> draw(my_frac,"abcdeeee");
	 =  [0.5]*8 (floats)

Or simply on the base example:

>> draw(my_frac);
	 =  [0.333]*3 (floats)

If they bother you, turn the examples off, and bring them back when you need them:

>> examples off
>> indices;
     s-op: indices
>> full_s;
     selector: full_s
>> examples on
>> indices;
     s-op: indices
 	 Example: indices("hey") = [0, 1, 2] (ints)

You can also do this selectively, turning only selector or s-op examples on and off, e.g.: selector examples off.
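For instance, after turning off only the selector examples, selectors are listed without their example matrices while s-ops still show theirs (a rough sketch):

>> selector examples off
>> full_s;
     selector: full_s
>> indices;
     s-op: indices
 	 Example: indices("hey") = [0, 1, 2] (ints)
>> selector examples on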

Create a selector that focuses each position on all other positions containing the same token. But first, set the base example to "hello" for a better idea of what's happening:

>> set example "hello"
>> same_token=select(tokens,tokens,==);
     selector: same_token
 	 Example:
 			     h e l l o
 			 h | 1        
 			 e |   1      
 			 l |     1 1  
 			 l |     1 1  
 			 o |         1

Then, use selector_width to compute, for each position, how many other positions the selector same_token focuses it on. This effectively computes an in-place histogram over the input:

>> histogram=selector_width(same_token);
     s-op: histogram
 	 Example: histogram("hello") = [1, 1, 2, 2, 1] (ints)

For more complicated examples, check out paper_examples.rasp!
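One such example from the paper is reverse, which flips the input by having each position i attend to position length-1-i. A sketch of the construction (the same idea appears in paper_examples.rasp; exact REPL output formatting may differ):

>> flip=select(indices,length-indices-1,==);
     selector: flip
 	 Example:
 			     h e l l o
 			 h |         1
 			 e |       1
 			 l |     1
 			 l |   1
 			 o | 1
>> reverse=aggregate(flip,tokens);
     s-op: reverse
 	 Example: reverse("hello") = [o, l, l, e, h] (strings)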

Experiments on Transformers

The transformers in the paper were trained, and their attention heatmaps visualised, using the code in this repository: https://github.com/tech-srl/RASP-exps
