Think Big, Teach Small: Do Language Models Distil Occam’s Razor?

Overview

Software related to the paper "Think Big, Teach Small: Do Language Models Distil Occam’s Razor?"

Authors: Gonzalo Jaimovitch-López, David Castellano-Falcón, Cèsar Ferri, José Hernández-Orallo

Experiments

GPT-2

The experiment is fully performed in a single Notebook.

When opening the Notebook, just follow the code sections to run the experiment. Note that a file with the experiment results is provided. The results are printed in the corresponding section.
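
As a rough illustration of what the Notebook does, the following minimal sketch queries GPT-2 with a few-shot prompt through the Hugging Face transformers library. The prompt format and decoding settings are illustrative assumptions, not the exact ones used in the paper.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the pre-trained GPT-2 model and tokenizer (illustrative; the Notebook defines its own setup)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical few-shot prompt with input/output examples of a string concept
prompt = "Input: abc Output: abcabc\nInput: de Output: dede\nInput: fg Output:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of a short continuation
output_ids = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))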

GPT-3

There are several Notebooks that post-process the outputs returned by GPT-3 in the experiments.

You can find two folders: main (for the experiments presented in the main paper) and additional (for the experiments included in the supplementary material).

The use of GPT-3 requires an API key, which cannot be provided with the code. However, the prompts used in the experiments are included in the repository.

If you would like to run the prompt queries in GPT-3, visit OpenAI's API webpage. Make sure you adjust the temperature depending on the experiment you would like to reproduce. Furthermore, note that results obtained through the web interface and through the Python API might differ due to different encodings.
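
If you prefer to query GPT-3 from Python, a minimal sketch along these lines can be used with the legacy (pre-1.0) openai client. The prompt below is only illustrative; the real prompts are the ones included in the repository.

import openai

openai.api_key = "YOUR_API_KEY"  # an API key is required and is not distributed with the code

# Illustrative few-shot prompt; the actual prompts are provided in the repository
response = openai.Completion.create(
    engine="davinci",   # one of 'ada', 'babbage', 'curie', 'davinci'
    prompt="Input: abc Output: abcabc\nInput: de Output:",
    max_tokens=10,
    temperature=0,      # set to 0 or 1 depending on the experiment
)
print(response["choices"][0]["text"])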

Main experiments

  1. Temperature = 0

  2. Temperature = 1

Run the lines of code in order. Note that you will have to choose the desired model (using the following cell at the top of the notebooks) to obtain the results.

# Choose between {'ada', 'babbage', 'curie', 'davinci'}
MODEL = 'davinci'
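
For reference, here is a hypothetical sketch of how the results for the selected model could then be loaded with pandas; the actual file names and columns are defined inside the notebooks.

import pandas as pd

MODEL = 'davinci'
results = pd.read_csv(f"results_{MODEL}.csv")   # assumed file name, for illustration only
print(results.head())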

Additional experiments

  1. Alternative alphabet (Apple, Banana)

  2. Separator between characters in input / output

  3. Concepts with loops

  4. Many more concepts / Not using machine teaching

Run the lines of code in order. Note that you will have to choose the desired experiment (using the following cell at the top of the notebooks) to obtain the results.

# Choose complete_EXPERIMENT.csv, where EXPERIMENT is one of {'ada', 'babbage', 'curie', 'davinci', 'EXP_A', 'EXP_B'}
EXPERIMENT = 'ada'
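
For reference, a minimal sketch of loading the chosen file with pandas, assuming the complete_<EXPERIMENT>.csv naming pattern indicated in the comment above; the columns are defined in the notebooks.

import pandas as pd

EXPERIMENT = 'ada'
results = pd.read_csv(f"complete_{EXPERIMENT}.csv")   # file pattern taken from the comment above
print(results.head())
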
Baselines

MagicHaskeller

MagicHaskeller must be installed beforehand.

To run the experiment, execute the Python script. The returned functions will be written in the corresponding file depending on the path provided in the script.

From the list of functions (you can find the outputs in this folder), we take the first function from the top of the list and use it as a solution, querying the test examples using Haskell. The summary of the results can be found in MHResults.txt.
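
As an illustration of that last step (this is an assumption, not the actual script), the top candidate function could be applied to a test input from Python by evaluating a Haskell expression with ghc -e; the candidate function and test input below are made up.

import subprocess

candidate = "\\x -> x ++ x"   # hypothetical function taken from the top of MagicHaskeller's list
test_input = '"abc"'

# Apply the candidate function to the test input by evaluating a Haskell expression
result = subprocess.run(["ghc", "-e", f"({candidate}) {test_input}"],
                        capture_output=True, text=True)
print(result.stdout.strip())   # expected output: "abcabc"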

Louise

Louise must be installed beforehand.

First, run Louise and execute the dedicated script, inserting the different examples where indicated, depending on the concept (they can be found in pos_neg_ex.txt).

Subsequently, the evaluation of the test examples (using the predicates returned by the system) is performed in the Notebook.
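
As an illustration only (an assumption, not necessarily how the Notebook is written), a test example can be checked against a predicate returned by Louise by querying SWI-Prolog from Python, e.g. with pyswip; the hypothesis and examples below are made up.

from pyswip import Prolog

prolog = Prolog()
# Hypothetical hypothesis returned by Louise for a "duplicate the input" concept
prolog.assertz("concept(X, Y) :- append(X, X, Y)")

# Check whether the learned predicate covers each (input, output) test example
test_examples = [("[a,b]", "[a,b,a,b]"), ("[c]", "[c,c,c]")]
for inp, out in test_examples:
    covered = bool(list(prolog.query(f"concept({inp}, {out})")))
    print(inp, out, "covered" if covered else "not covered")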

Humans

We provide a PDF with the questionnaire completed by the human participants in this experiment. Note that the headlines mark the start of each screen presented to the participants, as this is not clearly reflected in the PDF version of the form. The screen boundaries can be observed by opening the HTML file stored in the source code folder.

Additional Material

A Python script is provided to test the functioning of P3.

Finally, the R scripts used to generate the plots in the paper are included.
