Do Prompt-Based Models Really Understand the Meaning of Their Prompts?

Overview

This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?”

Usage

To replicate our results in Section 4, run:

python3 prompt_tune.py \
    --save-dir ../runs/prompt_tuned_sec4/ \
    --prompt-path ../data/binary_NLI_prompts.csv \
    --experiment-name sec4 \
    --few-shots 3,5,10,20,30,50,100,250 \
    --production \
    --seeds 1

Add --fully-train if you want to train on the entire training set in addition to few-shot settings.
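For example, the Section 4 command with full training added is identical apart from the extra flag:

python3 prompt_tune.py \
    --save-dir ../runs/prompt_tuned_sec4/ \
    --prompt-path ../data/binary_NLI_prompts.csv \
    --experiment-name sec4 \
    --few-shots 3,5,10,20,30,50,100,250 \
    --fully-train \
    --production \
    --seeds 1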

To replicate Section 5, run:

python3 prompt_tune.py \
    --save-dir ../runs/prompt_tuned_sec5/ \
    --prompt-path ../data/binary_NLI_prompts_permuted.csv \
    --experiment-name sec5 \
    --few-shots 3,5,10,20,30,50,100,250 \
    --production \
    --seeds 1

To get a fine-tuning baseline (Figure 1):

python3 fine_tune.py \
    --save-dir ../runs/fine_tune/ \
    --epochs 5 \
    --few-shots 3,5,10,20,30,50,100,250 \
    --fully-train \
    --production \
    --seeds 1

To replicate our exact results, use --seeds 1,2,3,4,5,6,7,8, which yields starting_example_index values of 550, 231, 974, 966, 1046, 2350, 1326, and 928, respectively. This matters because it ensures that all models trained under the same seed always see exactly the same training examples. See Section 3 of the paper for more details.

If these seeds do not produce the same starting_example_index values for you (which you can check in the output CSV files), you will have to manually specify the few-shot subset of training examples. I plan to add an argparse argument to make this easy.
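In the meantime, here is a rough sketch of what manually pinning a subset could look like with Hugging Face datasets, purely for illustration (the dataset name, indices, and shot count are placeholders, and a consecutive slice starting at starting_example_index is just one possible convention):

from datasets import load_dataset

# Illustrative only: pin an exact few-shot subset by index so that every run
# under a given "seed" sees exactly the same training examples. Replace the
# dataset, starting index, and shot count with the values from your output CSVs.
train = load_dataset("multi_nli", split="train")
starting_example_index = 550  # e.g., what seed 1 produced in our runs
num_shots = 30
few_shot_subset = train.select(
    range(starting_example_index, starting_example_index + num_shots)
)
print(few_shot_subset[0])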

All other hyperparameters are the same as the argparse defaults.

Miscellaneous Notes

You might notice that the code and output files are set up to produce a fine-grained analysis of HANS (McCoy et al., 2019). We actually ran all of our main experiments on HANS as well and got similar results, which we plan to write up in a future version of our paper. In the meantime, if you're curious, feel free to add --do-diagnosis, which will report the results on HANS.

Requirements

Python 3.9.

Python 3.7 should mostly work too; you would just have to replace the newer built-in generic type hints and dictionary union operators with their older equivalents.
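
For instance, the kind of substitutions involved (the function and variable names here are purely illustrative):

from typing import Dict, List

# On Python 3.9 you can write list[str] / dict[str, int] directly;
# on Python 3.7, use the typing-module equivalents instead:
def count_labels(labels: List[str]) -> Dict[str, int]:
    counts: Dict[str, int] = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return counts

# Python 3.9's dictionary union (defaults | overrides)
# becomes dict unpacking on Python 3.7:
defaults = {"lr": 1e-5, "epochs": 5}
overrides = {"epochs": 10}
merged = {**defaults, **overrides}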

Activate your preferred virtual environment and then run pip install -r requirements.txt. If you want to replicate our exact results, use:

torch==1.9.0+cu111
transformers==4.9.2
datasets==1.11.0
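
For example, one way to install those pins (assuming CUDA 11.1 and the standard PyTorch wheel index; adjust the torch build to match your hardware):

pip install torch==1.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install transformers==4.9.2 datasets==1.11.0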