Pattern-Exploiting Training (PET)

This repository contains the code for Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference and It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners. The papers introduce pattern-exploiting training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases. In low-resource settings, PET and iPET significantly outperform regular supervised training, various semi-supervised baselines and even GPT-3 despite requiring 99.9% fewer parameters. The iterative variant of PET (iPET) trains multiple generations of models and can even be used without any training data.

#Examples  Training Mode  Yelp (Full)  AG's News  Yahoo Questions  MNLI
0          unsupervised   33.8         69.5       44.0             39.1
0          iPET           56.7         87.5       70.7             53.6
100        supervised     53.0         86.0       62.9             47.9
100        PET            61.9         88.3       69.2             74.7
100        iPET           62.9         89.6       71.2             78.4

Note: To exactly reproduce the above results, make sure to use v1.1.0 (--branch v1.1.0).

📑 Contents

🔧 Setup

💬 CLI Usage

💻 API Usage

🐶 Train your own PET

📕 Citation

🔧 Setup

All requirements for PET can be found in requirements.txt. You can install all required packages with pip install -r requirements.txt.

💬 CLI Usage

The command line interface cli.py in this repository currently supports three different training modes (PET, iPET, supervised training), two additional evaluation methods (unsupervised and priming) and 13 different tasks. For Yelp Reviews, AG's News, Yahoo Questions, MNLI and X-Stance, see the original PET paper for further details. For the 8 SuperGLUE tasks, see the follow-up paper (It's Not Just Size That Matters).

PET Training and Evaluation

To train and evaluate a PET model for one of the supported tasks, simply run the following command:

python3 cli.py \
--method pet \
--pattern_ids $PATTERN_IDS \
--data_dir $DATA_DIR \
--model_type $MODEL_TYPE \
--model_name_or_path $MODEL_NAME_OR_PATH \
--task_name $TASK \
--output_dir $OUTPUT_DIR \
--do_train \
--do_eval

where

  • $PATTERN_IDS specifies the PVPs to use. For example, if you want to use all patterns, specify --pattern_ids 0 1 2 3 4 for AG's News and Yahoo Questions or --pattern_ids 0 1 2 3 for Yelp Reviews and MNLI.
  • $DATA_DIR is the directory containing the train and test files (check tasks.py to see how these files should be named and formatted for each task).
  • $MODEL_TYPE is the type of model being used, e.g. albert, bert or roberta.
  • $MODEL_NAME_OR_PATH is the name of a pretrained model (e.g., roberta-large or albert-xxlarge-v2) or the path to a pretrained model.
  • $TASK is the name of the task to train and evaluate on.
  • $OUTPUT_DIR is the name of the directory in which the trained model and evaluation results are saved.

You can additionally specify various training parameters for both the ensemble of PET models corresponding to individual PVPs (prefix --pet_) and for the final sequence classification model (prefix --sc_). For example, the default parameters used for our SuperGLUE evaluation are:

--pet_per_gpu_eval_batch_size 8 \
--pet_per_gpu_train_batch_size 2 \
--pet_gradient_accumulation_steps 8 \
--pet_max_steps 250 \
--pet_max_seq_length 256 \
--pet_repetitions 3 \
--sc_per_gpu_train_batch_size 2 \
--sc_per_gpu_unlabeled_batch_size 2 \
--sc_gradient_accumulation_steps 8 \
--sc_max_steps 5000 \
--sc_max_seq_length 256 \
--sc_repetitions 1

For each pattern $P and repetition $I, running the above command creates a directory $OUTPUT_DIR/p$P-i$I that contains the following files:

  • pytorch_model.bin: the finetuned model, possibly along with some model-specific files (e.g., spiece.model, special_tokens_map.json)
  • wrapper_config.json: the configuration of the model being used
  • train_config.json: the configuration used for training
  • eval_config.json: the configuration used for evaluation
  • logits.txt: the model's predictions on the unlabeled data
  • eval_logits.txt: the model's predictions on the evaluation data
  • results.json: a json file containing results such as the model's final accuracy
  • predictions.jsonl: a prediction file for the evaluation set in the SuperGLUE format

The final (distilled) model for each repetition $I can be found in $OUTPUT_DIR/final/p0-i$I, which contains the same files as described above.
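If you want to process these results programmatically, results.json is plain JSON and can be read with the standard library. A minimal sketch (the path shown is hypothetical, and the available keys depend on the metrics defined for your task):

import json

# Adjust the output directory and repetition index to match your run;
# the keys (e.g., an accuracy score) depend on the task's metrics.
with open('output/final/p0-i0/results.json') as f:
    results = json.load(f)
print(results)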

🚨 If your GPU runs out of memory during training, you can try decreasing both --pet_per_gpu_train_batch_size and --sc_per_gpu_unlabeled_batch_size while increasing both --pet_gradient_accumulation_steps and --sc_gradient_accumulation_steps.
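For example, starting from the SuperGLUE defaults above, halving each batch size while doubling the corresponding gradient accumulation steps keeps the effective batch size (per-GPU batch size times accumulation steps) unchanged at 16:

--pet_per_gpu_train_batch_size 1 \
--pet_gradient_accumulation_steps 16 \
--sc_per_gpu_unlabeled_batch_size 1 \
--sc_gradient_accumulation_steps 16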

iPET Training and Evaluation

To train and evaluate an iPET model for one of the supported tasks, simply run the same command as above, but replace --method pet with --method ipet. There are various additional iPET parameters that you can modify; all of them are prefixed with --ipet_.

For each generation $G, pattern $P and repetition $I, this creates a directory $OUTPUT_DIR/g$G/p$P-i$I that is structured in the same way as for regular PET. The final (distilled) model can again be found in $OUTPUT_DIR/final/p0-i$I.

🚨 If you use iPET with zero training examples, you need to specify how many examples for each label should be chosen in the first generation and you need to change the reduction strategy to mean: --ipet_n_most_likely 100 --reduction mean.

Supervised Training and Evaluation

To train and evaluate a regular sequence classifier in a supervised fashion, simply run the same command as above, but replace --method pet with --method sequence_classifier. There are various additional parameters for the sequence classifier that you can modify; all of them are prefixed with --sc_.

Unsupervised Evaluation

To evaluate a pretrained language model with the default PET patterns and verbalizers, but without fine-tuning, remove the argument --do_train and add --no_distillation so that no final distillation is performed.

Priming

If you want to use priming, remove the argument --do_train and add the arguments --priming --no_distillation so that all training examples are used for priming and no final distillation is performed.

🚨 Remember that you may need to increase the maximum sequence length to a much larger value, e.g. --pet_max_seq_length 5000. This only works with language models that support such long sequences, e.g. XLNet. To use XLNet, specify --model_type xlnet --model_name_or_path xlnet-large-cased --wrapper_type plm.

💻 API Usage

Instead of using the command line interface, you can also use the PET API directly; most of it is defined in pet.modeling. After running import pet, you can access methods such as train_pet, train_ipet and train_classifier. Check out their documentation for more information.
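For illustration, a rough sketch of an API-based run is shown below. Everything in it is an assumption modeled on cli.py rather than a verified reference: load_examples, the set-type constants and the exact keyword arguments of train_pet may differ in your version of the repository, so check pet.modeling and cli.py for the authoritative signatures.

import pet
from pet.tasks import load_examples, TRAIN_SET, UNLABELED_SET

# Assumed helper (see pet/tasks.py): load 100 labeled training examples
# and a pool of unlabeled examples for a built-in task.
train_data = load_examples('mnli', 'data/mnli', TRAIN_SET, num_examples=100)
unlabeled_data = load_examples('mnli', 'data/mnli', UNLABELED_SET, num_examples=10000)

# The six config objects (model/train/eval for the PET ensemble and for the
# final sequence classifier) are built exactly as in cli.py; their
# construction is omitted here, so treat this call as a sketch only.
pet.train_pet(pet_model_cfg, pet_train_cfg, pet_eval_cfg,
              sc_model_cfg, sc_train_cfg, sc_eval_cfg,
              pattern_ids=[0, 1, 2, 3], output_dir='output',
              train_data=train_data, unlabeled_data=unlabeled_data,
              do_train=True, do_eval=True)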

🐶 Train your own PET

To use PET for custom tasks, you need to define two things:

  • a DataProcessor, responsible for loading training and test data. See examples/custom_task_processor.py for an example and the sketch after this list.
  • a PVP, responsible for applying patterns to inputs and mapping labels to natural language verbalizations. See examples/custom_task_pvp.py for an example.
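
For the first component, a custom DataProcessor for a hypothetical binary sentiment task might look roughly as follows. This is a sketch modeled on examples/custom_task_processor.py: the import paths, the file format and the exact set of methods to implement are assumptions to check against your version of the repository.

from typing import List

from pet.tasks import DataProcessor, PROCESSORS
from pet.utils import InputExample

class MyTaskDataProcessor(DataProcessor):
    """Loads examples for a hypothetical binary sentiment task."""

    def get_labels(self) -> List[str]:
        return ['+1', '-1']

    def get_train_examples(self, data_dir: str) -> List[InputExample]:
        # Hypothetical file format: one "label<TAB>text" pair per line.
        examples = []
        with open(f'{data_dir}/train.tsv', encoding='utf8') as f:
            for idx, line in enumerate(f):
                label, text = line.rstrip('\n').split('\t', 1)
                examples.append(InputExample(guid=str(idx), text_a=text, label=label))
        return examples

    # get_dev_examples, get_test_examples and get_unlabeled_examples are
    # implemented analogously (omitted here for brevity).

# Register the processor so that --task_name my_task can find it.
PROCESSORS['my_task'] = MyTaskDataProcessor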

After having implemented the DataProcessor and the PVP, you can train a PET model using the command line as described above. Below, you can find additional information on how to define the two components of a PVP, verbalizers and patterns.

Verbalizers

Verbalizers are used to map task labels to words in natural language. For example, in a binary sentiment classification task, you could map the positive label (+1) to the word good and the negative label (-1) to the word bad. Verbalizers are realized through a PVP's verbalize() method. The simplest way of defining a verbalizer is to use a dictionary:

from typing import List  # required for the type annotation below

VERBALIZER = {"+1": ["good"], "-1": ["bad"]}

def verbalize(self, label) -> List[str]:
    return self.VERBALIZER[label]

Importantly, in PET's current version, verbalizers are by default restricted to single tokens in the underlying LM's vocabulary (to use more than one token, see below). Given a language model's tokenizer, you can easily check whether a word corresponds to a single token by verifying that len(tokenizer.tokenize(word)) == 1.
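For example, assuming you use the transformers library (which PET builds on), the check can be run as follows:

from transformers import AutoTokenizer

# Minimal sketch: check whether candidate verbalizations are single tokens
# in the vocabulary of the model you plan to use.
tokenizer = AutoTokenizer.from_pretrained('roberta-large')

for word in ['good', 'bad', 'wonderful']:
    tokens = tokenizer.tokenize(word)
    print(word, tokens, 'single token' if len(tokens) == 1 else 'NOT a single token')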

You can also define multiple verbalizations for a single label. For example, if you are unsure which words best represent the labels in a binary sentiment classification task, you could define your verbalizer as follows:

VERBALIZER = {"+1": ["great", "good", "wonderful", "perfect"], "-1": ["bad", "terrible", "horrible"]}

Patterns

Patterns are used to make the language model understand a given task; they must contain exactly one <MASK> token which is to be filled using the verbalizer. For binary sentiment classification based on a review's summary (<A>) and body (<B>), a suitable pattern may be <A>. <B>. Overall, it was <MASK>. Patterns are realized through a PVP's get_parts() method, which returns a pair of text sequences (where each sequence is represented by a list of strings):

def get_parts(self, example: InputExample):
    return [example.text_a, '.', example.text_b, '.'], ['Overall, it was ', self.mask]

If you do not want to use a pair of sequences, you can simply leave the second sequence empty:

def get_parts(self, example: InputExample):
    return [example.text_a, '.', example.text_b, '. Overall, it was ', self.mask], []

If you want to define several patterns, simply use the PVP's pattern_id attribute:

def get_parts(self, example: InputExample):
    if self.pattern_id == 1:
        return [example.text_a, '.', example.text_b, '.'], ['Overall, it was ', self.mask]
    elif self.pattern_id == 2:
        return ['It was just ', self.mask, '!', example.text_a, '.', example.text_b, '.'], []

When training the model using the command line, specify all patterns to be used (e.g., --pattern_ids 1 2).

Importantly, if a sequence is longer than the specified maximum sequence length of the underlying LM, PET must know which parts of the input can be shortened and which ones cannot (for example, the mask token must always be there). Therefore, PVP provides a shortenable() method to indicate that a piece of text can be shortened:

def get_parts(self, example: InputExample):
    text_a = self.shortenable(example.text_a)
    text_b = self.shortenable(example.text_b)
    return [text_a, '.', text_b, '. Overall, it was ', self.mask], []

PET with Multiple Masks

By default, the current implementation of PET and iPET only supports a fixed set of labels that is shared across all examples and verbalizers that correspond to a single token. However, for some tasks it may be necessary to use verbalizers that correspond to multiple tokens (as described here). To do so, you simply need the following two modifications:

  1. Add the following lines in your task's DataProcessor (see examples/custom_task_processor.py):

    from pet.tasks import TASK_HELPERS
    from pet.task_helpers import MultiMaskTaskHelper
    TASK_HELPERS['my_task'] = MultiMaskTaskHelper

    where 'my_task' is the name of your task.

  2. In your PVP, make sure that the get_parts() method always inserts the maximum number of mask tokens required for any verbalization. For example, if your verbalizer maps +1 to "really awesome" and -1 to "terrible" and if those are tokenized as ["really", "awe", "##some"] and ["terrible"], respectively, your get_parts() method should always return a sequence that contains exactly 3 mask tokens.
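
A hedged sketch of such a get_parts() method (repeating self.mask is one way to insert several mask tokens; compare the built-in PVPs in pet/pvp.py for how the existing tasks handle this):

def get_parts(self, example: InputExample):
    text = self.shortenable(example.text_a)
    # Always insert 3 mask tokens: "really awesome" is tokenized into
    # ["really", "awe", "##some"], the longest verbalization of any label.
    return [text, '. It was', self.mask, self.mask, self.mask, '.'], []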

With this modification, you can now use verbalizers consisting of multiple tokens:

VERBALIZER = {"+1": ["really good"], "-1": ["just bad"]}

However, there are several limitations to consider:

  • When using a MultiMaskTaskHelper, the maximum batch size for evaluation is 1.
  • As using multiple masks requires multiple forward passes during evaluation, the time required for evaluation scales about linearly with the length of the longest verbalizer. If you require verbalizers that consist of 10 or more tokens, using a generative LM might be a better approach.
  • The MultiMaskTaskHelper class is an experimental feature that is not thoroughly tested. In particular, this feature has only been tested for PET and not for iPET. If you observe something strange, please raise an issue.

For more flexibility, you can also write a custom TaskHelper. As a starting point, you can check out the classes CopaTaskHelper, WscTaskHelper and RecordTaskHelper in pet/task_helpers.py.

📕 Citation

If you make use of the code in this repository, please cite the following papers:

@article{schick2020exploiting,
  title={Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference},
  author={Timo Schick and Hinrich Schütze},
  journal={Computing Research Repository},
  volume={arXiv:2001.07676},
  url={http://arxiv.org/abs/2001.07676},
  year={2020}
}

@article{schick2020small,
  title={It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners},
  author={Timo Schick and Hinrich Schütze},
  journal={Computing Research Repository},
  volume={arXiv:2009.07118},
  url={http://arxiv.org/abs/2009.07118},
  year={2020}
}