pair-emnlp2020

Overview

Official repository for the paper:

Xinyu Hua and Lu Wang: PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation (EMNLP 2020)

If you find our work useful, please cite:

@inproceedings{hua-wang-2020-pair,
    title = "PAIR: Planning and Iterative Refinement in Pre-trained Transformersfor Long Text Generation",
    author = "Hua, Xinyu  and
      Wang, Lu",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}

Requirements

  • Python 3.7
  • PyTorch 1.4.0
  • PyTorchLightning 0.9.0
  • transformers 3.3.0
  • numpy
  • tqdm
  • pycorenlp (for preprocessing nytimes data)
  • nltk (for preprocessing nytimes data)
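
For reference, a minimal pip-based install matching the versions above might look like the following (the package names are the standard PyPI names; this exact command is not part of the original repository, so adjust it to your environment):

pip install torch==1.4.0 pytorch-lightning==0.9.0 transformers==3.3.0 numpy tqdm
pip install pycorenlp nltk  # only needed for preprocessing the nytimes data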

Data

We release the datasets at the following link (1.2G uncompressed). Please download and uncompress the file, and put it under the ./data directory. For the opinion and news domains, The New York Times Annotated Corpus is licensed by LDC, so we only provide the IDs for the train/dev/test splits; please follow the instructions to generate the dataset.
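
As a sketch, assuming the downloaded archive is a gzipped tarball named pair_data.tar.gz (a hypothetical name; use whatever file the link actually provides), the setup could look like:

mkdir -p data
tar -xzf pair_data.tar.gz -C ./data  # adjust the archive name and extraction path to the actual download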

Text Planning

To train a BERT planner:

cd planning
python train.py \
    --data-path=../data/ \
    --domain=[arggen,opinion,news] \
    --exp-name=demo \
    --save-interval=1 \
    --max-epoch=30 \
    --lr=5e-4 \
    --warmup-updates=5000 \
    --train-set=train \
    --valid-set=dev \
    --tensorboard-logdir=tboard/ \
    --predict-keyphrase-offset \
    --max-samples=32 \
    [--quiet]

Here --save-interval sets how frequently checkpoints are saved, --max-samples is the maximum number of samples per batch, and the optional --quiet flag suppresses intermediate logging. Checkpoints will be saved to checkpoints/planning/[domain]/[exp-name], and TensorBoard logs will be written under planning/tboard/.
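
To monitor training, you can point TensorBoard at that directory, for example:

tensorboard --logdir planning/tboard/  # run from the repository root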

To run inference with a trained model using greedy decoding:

cd planning
python decode.py \
    --data-path=../data/ \
    --domain=arggen \
    --test-set=test \
    --max-samples=32 \
    --predict-keyphrase-offset \
    --exp-name=demo \
    [--quiet]

The results will be saved to planning/output/.

Iterative Refinement

We provide implementations for four different setups:

  • Seq2seq: prompt -> tgt
  • KPSeq2seq: prompt + kp-set -> tgt
  • PAIR-light: prompt + kp-plan + masks -> tgt
  • PAIR-full: prompt + kp-plan + template -> tgt

To train a model:

cd refinement
python train.py \
    --domain=[arggen,opinion,news] \
    --setup=[seq2seq,kpseq2seq,pair-light,pair-full] \
    --train-set=train \
    --valid-set=dev \
    --train-batch-size=10 \
    --valid-batch-size=5 \
    --num-train-epochs=20 \
    --ckpt-dir=../checkpoints/[domain]/[setup]/demo \
    --tensorboard-dir=demo \
    [--quiet]
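
For example, to train the PAIR-full setup on the argument generation domain, the placeholders above would be filled in as follows (the directory names are the same illustrative ones used throughout this README):

cd refinement
python train.py \
    --domain=arggen \
    --setup=pair-full \
    --train-set=train \
    --valid-set=dev \
    --train-batch-size=10 \
    --valid-batch-size=5 \
    --num-train-epochs=20 \
    --ckpt-dir=../checkpoints/arggen/pair-full/demo \
    --tensorboard-dir=demo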

To run iterative refinement:

cd refinement
python generate.py \
    --domain=[arggen,opinion,news] \
    --setup=[seq2seq,kpseq2seq,pair-light,pair-full] \
    --test-set=test \
    --output-name=test_demo \
    --enforce-template-strategy=flexible \
    --do-sampling \
    --sampling-topk=100 \
    --sampling-topp=0.9 \
    --sample-times=3 \
    --ckpt-dir=../checkpoints/[domain]/[setup]/demo
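
Similarly, to run iterative refinement with the PAIR-full model trained above on the argument generation domain, substitute the placeholders:

cd refinement
python generate.py \
    --domain=arggen \
    --setup=pair-full \
    --test-set=test \
    --output-name=test_demo \
    --enforce-template-strategy=flexible \
    --do-sampling \
    --sampling-topk=100 \
    --sampling-topp=0.9 \
    --sample-times=3 \
    --ckpt-dir=../checkpoints/arggen/pair-full/demo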

Contact

Xinyu Hua (hua.x [at] northeastern.edu)

License

See the LICENSE file for details.
