A Distributional Approach To Controlled Text Generation


This is the code repository for the ICLR 2021 paper "A Distributional Approach to Controlled Text Generation". The code in this repo should help reproduce all the experiments and results in the paper.

Installation

pip install -r requirements.txt

Code Guide and Examples

  • package gdc/: contains all trainer classes.
  • folder examples/: implements the training loop for pointwise (run.py) and distributional & hybrid (run-distributional.py) experiments.
  • folder configs/: contains template configurations for all types of experiments.

Configuration Files

We use json configuration files to pass all training parameters, including the constraint types and specifications. Here are the most important config parameters (the rest are self-explanatory); a sketch of a full config putting them together appears after this list:

  • trainer_class: Depending on which type of constraint you want, use GDCTrainer for distributional and PointwiseGDCTrainer for pointwise constraints. Other trainers exist for baselines (see examples below).
  • lm_name: name of the language model you want to start with as on transformers hub.
  • ref_lm_name: name of the reference policy language model (the proposal used for importance sampling) as on transformers hub.
  • tk_name: tokenizer name.
  • scorers: this is the most important parameter; it defines your constraints. You can view each constraint as a scorer function that takes a collection of samples and returns an equal number of values representing the degree of constraint satisfaction in each sample. scorers is a list of json objects, each of which contains the following:
    • name: name of the constraint.
    • config: another json object with the following keys:
      • scorer_type: The type of constraints. Possible types include single_word, wordlist, wikibio-wordlist, model, and gender.
      • scorer_attribute: Depending on the scorer type, this defines what exactly you want to control for that given type. (See below for a tutorial on building your own scorer.)
  • desired_moments: this applies specifically to distributional constraints; it defines the target moments (feature means) that you want to achieve. Note that for pointwise constraints you must set your desired moment to 1.0.
  • moment_matching_sample_size: this defines the number of samples used for moment matching (or lambda learning). See section 2.2 in the paper.
  • eval_top_p: During training, we evaluate the model by sampling from it. This defines the nucleus sampling top_p value used for evaluation.
  • q_update_interval: Number of update steps after which we check if pi is better than q, and update q.
  • q_update_criterion: Criterion used to decide whether pi is improving or not. Options are KL-Divergence (used in the paper), or Total Variation Distance.
  • eval_interval: Number of updates after which to evaluate the model, i.e., sample with nucleus sampling and compute different quality metrics on the generations.
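
For concreteness, here is a sketch of what a full config might look like. The values and model names below are illustrative placeholders, not the exact settings shipped in configs/:

{
    "trainer_class": "GDCTrainer",
    "lm_name": "gpt2",
    "ref_lm_name": "gpt2",
    "tk_name": "gpt2",
    "scorers": [
        {"name": "sports", "config": {"scorer_type": "wikibio-wordlist", "scorer_attribute": "sports"}}
    ],
    "desired_moments": {"sports": 0.4},
    "moment_matching_sample_size": 8192,
    "eval_top_p": 0.9,
    "q_update_interval": 10,
    "q_update_criterion": "KL-Divergence",
    "eval_interval": 500
}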

Pointwise Constraints

In the case of solely pointwise constraints, the EBM can be constructed directly as P(x) = a(x) · b(x), where a(x) is the original language model and b(x) is a binary value indicating whether the pointwise constraint is met for a specific sequence x. Therefore, computing the λ parameters of the EBM is unnecessary; we provide an optimized implementation for this case in the PointwiseGDCTrainer.
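
As a rough illustration (a minimal sketch, not the repo's actual implementation; lm_logprob and scorer are hypothetical helpers), the unnormalized log-score of a sample under such a pointwise EBM could be computed as:

import math

def ebm_log_score(x, lm_logprob, scorer):
    # log P(x) = log a(x) + log b(x); since b(x) is binary, a sample that
    # violates the constraint gets log-score -inf (zero probability).
    b = scorer(x)
    return lm_logprob(x) + (0.0 if b == 1 else -math.inf)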

  • Single words
# Fine-tune GPT-2 on a single-word pointwise constraint
# inside word.json:
#    "trainer_class": "PointwiseGDCTrainer",
#    "scorer_type": "single_word",
#    "scorer_attribute": "amazing",  (try it! replace "amazing" with any word)

python run.py --config ../configs/gdc/pointwise/word.json
  • Word lists
# Fine-tune GPT-2 on a word-list pointwise constraint
# inside wordlist.json:
#    "trainer_class":"PointwiseGDCTrainer",
#    "scorer_type": "wordlist",
#    "scorer_attribute": "politics",  (try it! replace with any filename in ./gdc/resources/wordlists/

python run.py --config ../configs/gdc/pointwise/wordlist.json
  • Discriminators
#    "trainer_class":"PointwiseGDCTrainer",
# Use a pretrained sentiment classifier (class id = 0 or 2) as a pointwise constraint 
#    "scorer_type": "model",
#    "scorer_attribute": "sentiment",
#    "class_index": [0,2], # class idx: 0 positive, 1 negative, 2 very positive, 3 very negative

python run.py --config ../configs/gdc/pointwise/discriminator.json
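
To make the model scorer concrete, here is a hedged sketch of how a classifier-based constraint can score samples. The checkpoint name is a placeholder, not the classifier used in the paper:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint with 2 classes (0 negative, 1 positive);
# the paper's classifier uses the 4-class scheme listed above.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(name)
clf = AutoModelForSequenceClassification.from_pretrained(name)

def model_scorer(samples, class_index=(1,)):
    # return 1 when the predicted class is in the allowed set, else 0
    enc = tok(samples, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        preds = clf(**enc).logits.argmax(dim=-1)
    return [1 if p.item() in class_index else 0 for p in preds]

print(model_scorer(["What an amazing movie!", "Terrible plot."]))  # e.g. [1, 0]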

Distributional and Hybrid Constraints

  • Single Distributional Constraint
# inside the config file single-distributional.json
# this is how to define scorers and assign them the desired moments
#    "scorers":[
#        {"name": "female", "config":{"scorer_type": "gender", "scorer_attribute": "female"}}
#    ],
#    "desired_moments": {"female":0.50},
#    "trainer_class":"GDCTrainer",


python run-distributional.py --config ../configs/distributional/single-distributional.json

  • Multiple Distributional Constraints
# inside multiple-distributional.json config file
# add four wordlist constraints with different desired moments
#    "scorers":[
#        {"name": "science", "config":{"scorer_type": "wikibio-wordlist", "scorer_attribute":"science"}},
#        {"name": "art", "config":{"scorer_type": "wikibio-wordlist", "scorer_attribute": "art"}},
#        {"name": "sports", "config":{"scorer_type": "wikibio-wordlist", "scorer_attribute": "sports"},
#        {"name": "business", "config":{"scorer_type": "wikibio-wordlist", "scorer_attribute": "business"}}
#    ],
#    "desired_moments": {"science":0.4, "art":0.4, "business":0.10, "sports":0.10},
#    "trainer_class":"GDCTrainer",


python run-distributional.py --config ../configs/distributional/multiple-distributional.json
  • Hybrid constraints (pointwise + distributional)
# inside hybrid.json config file here is how to combine pointwise and distributional constraints
# when the desired moment is 1.0, the constraint is pointwise, while 0.5 makes it distributional
#    "scorers":[
#        {"name": "female", "config":{ "scorer_type": "gender", "scorer_attribute": "female"}}, 
#        {"name": "sports", "config": {"scorer_type":"wikibio-wordlist", "scorer_attribute": "sports"}}
#    ],
#    "desired_moments": {"female":0.5, "sports": 1.0},
#    "trainer_class":"GDCTrainer",

python run-distributional.py --config ../configs/distributional/hybrid.json
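
To give intuition for how desired_moments enter training: the trainer must estimate the current feature moments before matching them to the targets. Here is a hedged sketch of such an estimate via self-normalized importance sampling, in the spirit of section 2.2 of the paper (tensor shapes and names are assumptions, not the GDCTrainer API):

import torch

def estimate_moments(features, logp_pi, logp_q):
    # features: [n_samples, n_features] constraint values for samples drawn from q
    # logp_pi, logp_q: [n_samples] log-probabilities under policy pi and proposal q
    w = torch.exp(logp_pi - logp_q)  # importance weights pi(x)/q(x)
    w = w / w.sum()                  # self-normalize
    return w @ features              # estimated feature means under pi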

Baselines

We implement three reinforcement learning baselines. Note that RL baselines are only suitable for pointwise constraints. Here are some examples of how to run them on pointwise tasks:

  • REINFORCE (Williams, 1992b), using φ(x) as the reward signal. A bare-bones sketch of the underlying update follows the command below.
# Fine-tune GPT-2 on a word-list constraint
# inside REINFORCE.json, these options enable this baseline:
#    "trainer_class": "PGTrainer"   (PG -> Policy gradient)
#    "scorer_type": "wordlist",
#    "scorer_attribute": "politics",
python run.py --config ../configs/reinforce/REINFORCE.json
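
For intuition only, here is a minimal sketch of the REINFORCE loss these trainers build on (names and shapes are assumptions, not the PGTrainer API):

import torch

def reinforce_loss(logprobs, rewards):
    # logprobs: log pi(x) of sampled sequences, shape [batch]
    # rewards:  phi(x) constraint scores for the same samples, shape [batch]
    # Minimizing this moves probability mass toward constraint-satisfying samples.
    return -(rewards.detach() * logprobs).mean()
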
  • REINFORCE_P(x): REINFORCE again, with the EBM P as the reward signal.
# Fine-tune GPT-2 on a single-word constraint
# inside REINFORCE_Px.json, these two options activate the REINFORCE_P(x) baseline:
#   "trainer_class": "PGTrainer",
#   "use_P_as_reward": true,    (tells PGTrainer to use the EBM P as the reward)

# Single word = "amazing" pointwise constraint (try it! replace "amazing" with any word) 
#    "scorer_type": "single_word",
#    "scorer_attribute": "amazing",

python run.py --config ../configs/reinforce/REINFORCE_Px.json
  • ZIEGLER (Ziegler et al., 2019): the Proximal Policy Optimization (PPO) algorithm with φ(x) as the reward signal, plus a KL penalty penalizing divergence from the original LM.
# Fine-tune GPT-2 on a single-word constraint
# inside PPO.json
#   "trainer_class": "PPOTrainer",

# use a pretrained sentiment classifier (class id = 0 or 2) as a pointwise constraint 
#    "scorer_type": "model",
#    "scorer_attribute": "sentiment",
#    "class_index": [0,2], # class idx: 0 positive, 1 negative, 2 very postive, 3 very negative

python run.py --config ../configs/ppo/PPO.json

How Do I Define My Own Constraint?

Let's say you have another kind of constraint, different from the existing ones. Say you're not very passionate about the letter "z", so you want only 20% of the generated text to contain the letter "z". Clearly, this is a distributional constraint.

Step 1: Build your Scorer Function

The first step is to go to gdc/scorer.py and, in get_scoring_fn(), add another if branch (obviously, with more scorers this should be done in a more elegant way):

elif self.config['scorer_type'] == 'single_letter':

    def scoring_fn(samples):
        # check each sample for the target letter, which is passed
        # in self.config['scorer_attribute']
        letter = self.config['scorer_attribute']
        # 1 if a sample contains the letter, otherwise 0, for every sample
        return [1 if letter in sample else 0 for sample in samples]
      

You can also add any code that your scorer needs in the __init__() function.
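
To sanity-check the scoring logic on its own (a standalone snippet, outside the Scorer class):

samples = ["the zebra zigzagged", "a quiet afternoon"]
letter = "z"
print([1 if letter in s else 0 for s in samples])  # prints [1, 0]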

Step 2: Set up your Configs

As you only have a single distributional constraint, you can clone gdc/configs/distributional/single-distributional.json and edit the following to add your "z" letter constraint.

 "scorers":[
        {"name": "z_20", "config":{"scorer_type": "single_letter", "scorer_attribute":"z"}}
        ]
 "desired_moments": {"z_20":0.20}, 
 ....

Then just pass the new config json to run-distributional.py as shown above, and you are good to go!

Contributors

The authors of this work contributed equally to this project and its affiliated publication. Muhammad Khalifa performed this work during his research internship at Naver Labs Europe.

Muhammad Khalifa, [email protected]

Hady Elsahar, [email protected]

Marc Dymetman, [email protected]

Citation

@inproceedings{
    CNTRL_NLG_ICLR2021,
    title={A Distributional Approach to Controlled Text Generation},
    author={Muhammad Khalifa and Hady Elsahar and Marc Dymetman},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=jWkw45-9AbL}
}