Code for ACL 21: Generating Query Focused Summaries from Query-Free Resources

Overview


This repository releases the code for Generating Query Focused Summaries from Query-Free Resources.

Please cite the following paper [bib] if you use this code:

Xu, Yumo, and Mirella Lapata. "Generating Query Focused Summaries from Query-Free Resources." In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 6096–6109. 2021.

The availability of large-scale datasets has driven the development of neural models that create generic summaries from single or multiple documents. In this work we consider query focused summarization (QFS), a task for which training data in the form of queries, documents, and summaries is not readily available. We propose to decompose QFS into (1) query modeling (i.e., finding supportive evidence within a set of documents for a query) and (2) conditional language modeling (i.e., summary generation). We introduce MaRGE, a Masked ROUGE Regression framework for evidence estimation and ranking which relies on a unified representation for summaries and queries, so that summaries in generic data can be converted into proxy queries for learning a query model. Experiments across QFS benchmarks and query types show that our model achieves state-of-the-art performance despite learning from weak supervision.

Should you have any queries, please contact me at [email protected].

Preliminary setup

Project structure

marge
└───requirements.txt
└───README.md
└───log        # logging files
└───run        # scripts for MaRGE training
└───src        # source files
└───data       # generic data for training; qfs data for test/dev
└───graph      # graph components for query expansion
└───model      # MaRGE models for inference
└───rank       # ranking results
└───text       # summarization results
└───unilm_in   # input files to UniLM
└───unilm_out  # output files from UniLM

After cloning this project, use the following command to initialize the structure:

mkdir log data graph model rank text unilm_in unilm_out

Creating environment

cd ..
virtualenv -p python3.6 marge
cd marge
. bin/activate
pip install -r requirements.txt

You need to install apex:

cd ..
git clone https://www.github.com/nvidia/apex
cd apex
python3 setup.py install

Also, you need to set up ROUGE evaluation if you have not done so yet. Please refer to this repository. After finishing the setup, specify the ROUGE path in frame/utils/config_loader.py as an attribute of PathParser:

self.rouge_dir = '~/ROUGE-1.5.5/data'  # specify your ROUGE dir

Preparing benchmark data

Since we are not allowed to distribute DUC clusters and summaries, please request DUC 2005-2007 from NIST. After acquiring the data, gather each year's clusters and summaries under data/duc_cluster and data/duc_summary, respectively. For instance, DUC 2006's clusters and summaries should be placed under data/duc_cluster/2006/ and data/duc_summary/2006/, respectively. As for DUC queries: you don't have to prepare them yourself; we have put 3 json files for DUC 2005-2007 under data/masked_query, which contain a raw query and a masked query for each cluster. Queries will be fetched from these files at test time.
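
For instance, the DUC 2006 data could be put in place as follows (a sketch using the paths above; /path/to/DUC2006 is a placeholder for your NIST release):

mkdir -p data/duc_cluster/2006 data/duc_summary/2006
cp -r /path/to/DUC2006/clusters/* data/duc_cluster/2006/
cp -r /path/to/DUC2006/summaries/* data/duc_summary/2006/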

TD-QFS data can be downloaded from here. You can also use the processed version here.

After data preparation, you should have the following directory structure with the right files under each folder:

marge
└───data
│   └───duc_cluster    # DUC clusters
│   └───duc_summary    # DUC reference summaries
│   └───masked_query   # DUC queries (raw and masked)
│   └───tdqfs          # TD-QFS clusters, queries and reference summaries

MaRGE: query modeling

Preparing training data

Source files for building training data are under src/sripts. For each dataset (Multi-News or CNN/DM), there are three steps to create MaRGE training data.

A training sample for MaRGE can be represented as {sentence, masked summary} -> ROUGE(sentence, summary). So we need to compute the ROUGE scores for all sentences (step 1) and create masked summaries (step 2). Then we put them together (step 3).
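
For illustration, one training record could look like this (a minimal sketch; the field names are hypothetical, not the exact schema produced by the build scripts):

sample = {
    'sentence': 'The storm caused widespread flooding across the region.',       # candidate evidence sentence
    'masked_summary': '[MASK] hit the coast on Monday, causing [MASK] damage.',  # proxy query built from the summary
    'target': 0.41,  # ROUGE(sentence, summary): the regression target
}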

  1. Calculate ROUGE scores for all sentences:
python src/sripts/dump_sentence_rouge_mp.py
  2. Build masked summaries:
python src/sripts/mask_summary_with_ratio.py
  3. Build train/val/test datasets:
python src/sripts/build_marge_dataset_mn.py

In our experiments, MaRGE trained on data from Multi-News yielded the best performance in query modeling. If you want to build training data from CNN/DM:

  1. Use the function gathered_mp_dump_sentence_cnndm() in the first step (otherwise, use the function gathered_mp_dump_sentence_mn(); see the sketch after this list)
  2. Set dataset='cnndm' in the second step (otherwise, dataset='mn')
  3. Use build_marge_dataset_cnndm.py instead for the last step
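
For example, the first switch amounts to changing which gathering function the script invokes (a sketch; the actual entry point in dump_sentence_rouge_mp.py may look different):

# in src/sripts/dump_sentence_rouge_mp.py (sketch)
if __name__ == '__main__':
    # gathered_mp_dump_sentence_mn()   # Multi-News (default)
    gathered_mp_dump_sentence_cnndm()  # CNN/DM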

Model training

Depending on which training data you have built, run one of the following two scripts:

. ./run/run_rr_cnndm.sh   # train MaRGE with data from CNN/DM
. ./run/run_rr_mn.sh  # train MaRGE with data from Multi-News

Configs specified in these two files are used in our experiments, but feel free to change them for further experimentation.

Inference and evaluation

Use src/frame/rr/main.py for DUC evaluation and src/frame/rr/main_tdqfs.py for TD-QFS evaluation. We take DUC evaluation as an example.

In src/frame/rr/main.py, run the following methods in order (or at once):

init()
dump_rel_scores()  # inference with MaRGE
rel_scores2rank()  # turn sentence scores to sentence rank
rr_rank2records()  # take top sentences

To evaluate evidence rank, in src/frame/rr/main.py, run:

select_e2e()

MaRGESum: summary generation

Prepare training data from Multi-News

To train a controllable generator, we make the following three changes to the input from Multi-News (and CNN/DM), as sketched after the list:

  1. Re-order input sentences according to their ROUGE scores, so that generation is biased towards the top-ranked ones:
python scripts/selector_for_train.py
  2. Prepend a summary-length token
  3. Prepend a masked summary (UMR-S)
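
Putting the three changes together, one training input could be assembled roughly as follows (a sketch; the length-token format and helper names are hypothetical):

def build_generator_input(doc_sentences, sent_rouge, masked_summary):
    # Change 1: re-order input sentences by ROUGE score, highest first
    ranked = sorted(doc_sentences, key=lambda s: sent_rouge[s], reverse=True)
    # Change 2: prepend a summary-length token (token format is illustrative)
    length_token = '<len_250>'
    # Change 3: prepend the masked summary (UMR-S) as a proxy query
    return ' '.join([length_token, masked_summary] + ranked)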

Prepare training data from CNN/DM

Our best generation result is obtained with CNN/DM data. To train MaRGESum on CNN/DM data, apart from the three customizations above, we need an extra step: building a multi-document version of CNN/DM.

This is mainly because the summaries in the original CNN/DM are fairly short, while testing on QFS requires 250-word outputs. To fix this, we concatenate summaries from a few relevant samples to get a long enough summary. The input is then a cluster of the documents from these relevant samples.

This involves using DrQA to index all summaries in CNN/DM. After indexing, you can use the following script to cluster samples by retrieving similar summaries (a sketch of the procedure follows below):

python scripts/build_cnndm_clusters.py
  • TODO: upload the training data, so this multi-document CNN/DM can be used without building it from scratch.
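
The clustering step can be sketched as follows (illustrative only; retrieve_similar stands in for the actual DrQA-based retrieval):

def build_cluster(seed, retrieve_similar, min_summary_words=250):
    # Concatenate summaries of relevant samples until the target length is reached;
    # their source documents together form the input cluster.
    summary, documents = seed['summary'], [seed['document']]
    for neighbor in retrieve_similar(seed['summary']):
        if len(summary.split()) >= min_summary_words:
            break
        summary += ' ' + neighbor['summary']
        documents.append(neighbor['document'])
    return {'documents': documents, 'summary': summary}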

Inference and evaluation

Setting up UniLM environment

To evaluate abstractive summarization, you need to set up a UniLM environment following the instructions here.

After setting up UniLM, in src/frame/rr/main.py, run:

build_unilm_input(src='rank')

This turns ranked evidence from MaRGE into MaRGESum input files.

Now you can evaluate the trained UniLM model for development and testing. Go to the UniLM project root, set the correct input directory, and decode the summaries.

  • TODO: add detailed documentation for setting up UniLM.
  • TODO: add detailed documentation for decoding.

To evaluate the output, use the following function in src/frame/rr/main.py:

eval_unilm_out()

You can specify inference configs in src/frame/rr/rr_config.py.
