Think Big, Teach Small: Do Language Models Distil Occam’s Razor?

Overview


Software related to the paper "Think Big, Teach Small: Do Language Models Distil Occam’s Razor?"

Authors: Gonzalo Jaimovitch-López, David Castellano-Falcón, Cèsar Ferri, José Hernández-Orallo

Experiments

GPT-2

The experiment is fully performed in a single Notebook.

When opening the Notebook, simply follow the code sections in order to run the experiment. Note that a file with the experiment results is provided; the results are printed in the corresponding section.
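For reference, the following is a minimal sketch (not taken from the Notebook) of how a single GPT-2 completion can be obtained with the Hugging Face transformers library; the prompt and generation settings are illustrative placeholders.

# Minimal sketch, assuming the Hugging Face transformers library; the prompt
# and settings below are placeholders, not the ones used in the Notebook.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Input: abab Output: ab\nInput: abba Output:"  # hypothetical few-shot prompt
completion = generator(prompt, max_new_tokens=5, do_sample=False)  # greedy decoding
print(completion[0]["generated_text"])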

GPT-3

There are several Notebooks that post-process the outputs returned by GPT-3 in the experiment.

You can find two folders: main (for the experiments presented in the main paper) and additional (for the experiments included in the supplementary material).

The use of GPT-3 requires an API key, which cannot be provided with the code. However, the prompts used in the experiment are included in the repository.

If you would like to run the prompt queries in GPT-3, visit OpenAI's API webpage. Make sure you adjust the temperature depending on the experiment you would like to test. Furthermore, note that the results obtained through the webpage API and through the Python environment might differ because of the different encodings.
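As a hedged sketch (not part of the released code), a single prompt can also be sent from Python with the legacy openai package; the model name, prompt and token limit below are illustrative, and the API key must be supplied by the user.

# Hedged sketch using the (legacy) openai Python package; model, prompt and
# max_tokens are illustrative, and the API key is not provided with the code.
import openai

openai.api_key = "YOUR_API_KEY"
response = openai.Completion.create(
    engine="davinci",              # or 'ada', 'babbage', 'curie'
    prompt="Input: abba Output:",  # hypothetical prompt; the real ones are in the repository
    temperature=0,                 # 0 or 1 depending on the experiment
    max_tokens=5,
)
print(response["choices"][0]["text"])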

Main experiments

  1. Temperature = 0

  2. Temperature = 1

Run the lines of code in order. Note that you will have to choose the desired model (using the following cell at the top of the notebooks) to obtain the results.

#Choose between {'ada', 'babbage', 'curie', 'davinci'}
MODEL = 'davinci'
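As an illustration only, the chosen model can then be used to load the corresponding results file; the actual file name pattern used by the notebooks may differ.

# Illustrative only: loading a results file for the chosen model;
# the file name below is hypothetical.
import pandas as pd
results = pd.read_csv(f"complete_{MODEL}.csv")  # hypothetical path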

Additional experiments

  1. Alternative alphabet (Apple, Banana)

  2. Separator between characters in input / output

  3. Concepts with loops

  4. Many more concepts / Not using machine teaching

Run the lines of code in order. Note that you will have to choose the desired experiment (using the following cell at the top of the notebooks) to obtain the results.

# Choose complete_EXPERIMENT.csv, where EXPERIMENT is one of {'ada', 'babbage', 'curie', 'davinci', 'EXP_A', 'EXP_B'}
EXPERIMENT = 'ada'

Baselines

MagicHaskeller

MagicHaskeller must be installed beforehand.

To run the experiment, execute the Python script. The returned functions will be written to the corresponding file, depending on the path provided in the script.

From the list of functions (the outputs can be found in this folder), we take the first function at the top of the list and use it as the solution, querying the test examples using Haskell (see the sketch below). A summary of the results can be found in MHResults.txt.
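The following is a minimal sketch of this evaluation step, assuming GHC is installed; the candidate function and test example are placeholders, and the released script may proceed differently.

# Hedged sketch: evaluating the first returned function on a test example by
# calling GHC's expression evaluator; the function and example are placeholders.
import subprocess

candidate = "\\xs -> reverse xs"   # hypothetical function taken from the output list
test_input = '"abab"'              # hypothetical test example

result = subprocess.run(
    ["ghc", "-e", f"({candidate}) {test_input}"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())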

Louise

Louise must be installed beforehand.

First, run Louise and execute the dedicated script, adding the examples for each concept where indicated (they can be found in pos_neg_ex.txt).

Subsequently, the test examples are evaluated in the Notebook using the predicates returned by the system (see the sketch below).
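As a rough sketch of what such an evaluation could look like (the Notebook may do this differently), the learned predicates could be consulted through SWI-Prolog from Python; the file name, predicate name and test pairs below are hypothetical.

# Hedged sketch, assuming SWI-Prolog and the pyswip package are available;
# the file, predicate and test pairs are placeholders, not the repository's.
from pyswip import Prolog

prolog = Prolog()
prolog.consult("hypothesis.pl")                  # hypothetical file with the learned predicates
test_examples = [("abab", "ab"), ("abba", "a")]  # hypothetical test pairs
for inp, out in test_examples:
    holds = bool(list(prolog.query(f"concept('{inp}', '{out}')")))  # hypothetical predicate
    print(inp, out, holds)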

Humans

We provide a PDF with the questionnaire completed by the human participants in this experiment. Note that the headlines mark the start of each screen presented to the participants, as this is not clearly reflected in the PDF version of the form; it can be seen by opening the HTML file stored in the source code folder.

Additional Material

A Python script is provided to test the functioning of P3.

Finally, the R scripts used to generate the plots in the paper are included.
