
DDAMS

This is the PyTorch code for our IJCAI 2021 paper Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization [arXiv preprint].

Requirements

  • We use Conda with Python 3.7 and strongly recommend that you create a new environment: conda create -n ddams python=3.7.
  • Run the following command to install the dependencies: pip install -r requirements.txt.

Data

You can download the data here and put it under the project directory DDAMS/data/xxx.

  • data/ami
    • data/ami/ami: preprocessed meeting data.
    • data/ami/ami_qg: pseudo summarization data.
    • data/ami/ami_reference: gold references for the test files.
  • data/icsi
    • data/icsi/icsi: preprocessed meeting data.
    • data/icsi/icsi_qg: pseudo summarization data.
    • data/icsi/icsi_reference: gold references for the test files.
  • data/glove: pre-trained word embeddings glove.6B.300d.txt.
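
After downloading, you can confirm the layout matches what the commands in this README expect. The following is a minimal sanity-check sketch; the file list simply mirrors the paths used below.

# Hedged sanity check: verify the downloaded data sits where the commands below expect it.
from pathlib import Path

expected = [
    "data/ami/ami/test.src", "data/ami/ami/test.tgt", "data/ami/ami/test.seg",
    "data/ami/ami/test.speaker", "data/ami/ami/test.relation",
    "data/icsi/icsi/test.src", "data/icsi/icsi/test.seg",
    "data/icsi/icsi/test.speaker", "data/icsi/icsi/test.relation",
    "data/glove/glove.6B.300d.txt",
]
for p in expected:
    print(("OK      " if Path(p).is_file() else "MISSING ") + p)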

Reproduce Results

Follow the steps below to reproduce the best results reported in our paper.

download checkpoints

Download the checkpoints here. Put the checkpoints, AMI.pt and ICSI.pt, under the project directory DDAMS/models/.

translate

Produce the final summaries.

For AMI, the following command produces summaries/ami_summary.txt:

CUDA_VISIBLE_DEVICES=X python translate.py -batch_size 1 \
               -src data/ami/ami/test.src \
               -tgt data/ami/ami/test.tgt \
               -seg data/ami/ami/test.seg \
               -speaker data/ami/ami/test.speaker \
               -relation data/ami/ami/test.relation \
               -beam_size 10 \
               -share_vocab \
               -dynamic_dict \
               -replace_unk \
               -model models/AMI.pt \
               -output summaries/ami_summary.txt \
               -block_ngram_repeat 3 \
               -gpu 0 \
               -min_length 280 \
               -max_length 450
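
Note: with CUDA_VISIBLE_DEVICES=X set, the selected GPU is re-indexed as device 0 inside the process, so -gpu 0 is correct regardless of which physical GPU X refers to.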

For ICSI, the following command produces summaries/icsi_summary.txt:

CUDA_VISIBLE_DEVICES=X python translate.py -batch_size 1 \
               -src data/icsi/icsi/test.src \
               -seg data/icsi/icsi/test.seg \
               -speaker data/icsi/icsi/test.speaker \
               -relation data/icsi/icsi/test.relation \
               -beam_size 10 \
               -share_vocab \
               -dynamic_dict \
               -replace_unk \
               -model models/ICSI.pt \
               -output summaries/icsi_summary.txt \
               -block_ngram_repeat 3 \
               -gpu 0 \
               -min_length 400 \
               -max_length 550
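
The larger -min_length/-max_length values here reflect that ICSI reference summaries are substantially longer than AMI ones.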

remove tags

The tags <t> and </t> will raise errors in the ROUGE evaluation, so we first remove them (following OpenNMT).

sed -i 's/ <\/t>//g' summaries/ami_summary.txt
sed -i 's/<t> //g' summaries/ami_summary.txt
sed -i 's/ <\/t>//g' summaries/icsi_summary.txt
sed -i 's/<t> //g' summaries/icsi_summary.txt
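
If in-place sed is not available (for example, BSD sed on macOS handles -i differently), the following Python equivalent does the same cleanup; clean_tags.py is a hypothetical helper name, not a script shipped with this repo.

# clean_tags.py -- removes <t> / </t> tags in place, mirroring the sed commands above.
import sys

for path in sys.argv[1:]:
    with open(path, encoding="utf-8") as f:
        text = f.read()
    text = text.replace(" </t>", "").replace("<t> ", "")
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

Usage: python clean_tags.py summaries/ami_summary.txt summaries/icsi_summary.txt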

test rouge score

  • In test_rouge.py and test_rouge_icsi.py, change the path passed to pyrouge.Rouge155() to your local ROUGE-1.5.5 installation (see the sketch below the commands).

Output format >> ROUGE(1/2/L): xx.xx-xx.xx-xx.xx

python test_rouge.py -c summaries/ami_summary.txt
python test_rouge_icsi.py -c summaries/icsi_summary.txt
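
For reference, this is roughly what the pyrouge setup inside such a test script looks like. This is a hedged sketch: the directories and filename patterns below are illustrative placeholders, not necessarily the ones used in test_rouge.py.

# Hedged sketch of a pyrouge-based scorer; directories and patterns are illustrative.
from pyrouge import Rouge155

r = Rouge155('/absolute/path/to/ROUGE-1.5.5')   # <- the local path you need to change
r.system_dir = 'summaries/system'               # hypothetical dir of generated summaries
r.model_dir = 'summaries/reference'             # hypothetical dir of gold references
r.system_filename_pattern = r'(\d+)_summary.txt'
r.model_filename_pattern = '#ID#_reference.txt'
print(r.convert_and_evaluate())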

ROUGE score

You will get the following ROUGE scores.

         ROUGE-1   ROUGE-2   ROUGE-L
AMI      53.15     22.32     25.67
ICSI     40.41     11.02     19.18

From Scratch

For AMI

Preprocess

(1) Preprocess the AMI dataset.

python preprocess.py -train_src data/ami/ami/train.src \
                     -train_tgt data/ami/ami/train.tgt \
                     -train_seg data/ami/ami/train.seg \
                     -train_speaker data/ami/ami/train.speaker \
                     -train_relation data/ami/ami/train.relation \
                     -valid_src data/ami/ami/valid.src \
                     -valid_tgt data/ami/ami/valid.tgt \
                     -valid_seg data/ami/ami/valid.seg \
                     -valid_speaker data/ami/ami/valid.speaker \
                     -valid_relation data/ami/ami/valid.relation \
                     -save_data data/ami/AMI \
                     -dynamic_dict \
                     -share_vocab \
                     -lower \
                     -overwrite
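
This writes the preprocessed dataset and a vocabulary file under the -save_data prefix; in particular, data/ami/AMI.vocab.pt is consumed by step (2) below.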

(2) Create pre-trained word embeddings.

python embeddings_to_torch.py -emb_file_both data/glove/glove.6B.300d.txt \
                              -dict_file data/ami/AMI.vocab.pt \
                              -output_file data/ami/ami_embeddings
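
Optionally, you can sanity-check the converted embeddings. This sketch assumes the .enc.pt/.dec.pt outputs are tensors saved with torch.save, matching the -pre_word_vecs_enc/-pre_word_vecs_dec flags used in training below.

# Hedged check: load the converted embedding files and print their shapes
# (expected roughly vocab_size x 300 for glove.6B.300d).
import torch

for name in ("data/ami/ami_embeddings.enc.pt", "data/ami/ami_embeddings.dec.pt"):
    emb = torch.load(name)
    print(name, getattr(emb, "shape", type(emb)))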

(3) Preprocess the pseudo summarization dataset.

python preprocess.py -train_src data/ami/ami_qg/train.src \
                     -train_tgt data/ami/ami_qg/train.tgt \
                     -train_seg data/ami/ami_qg/train.seg \
                     -train_speaker data/ami/ami_qg/train.speaker \
                     -train_relation data/ami/ami_qg/train.relation \
                     -save_data data/ami/AMIQG \
                     -lower \
                     -overwrite \
                     -shard_size 500 \
                     -dynamic_dict \
                     -share_vocab
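
Here -shard_size 500 splits the much larger pseudo dataset into shards of 500 examples each, so preprocessing does not have to hold everything in memory at once.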

Train

(1) We first pre-train DDAMS on the pseudo summarization dataset.

  • Run the following command once to save the config file (-save_config).
  • Then remove -save_config and rerun the command to start training.

CUDA_VISIBLE_DEVICES=X python train.py -save_model ami_qg_pretrain/AMI_qg \
           -data data/ami/AMIQG \
           -speaker_type ami \
           -batch_size 64 \
           -learning_rate 0.001 \
           -share_embeddings \
           -share_decoder_embeddings \
           -copy_attn \
           -reuse_copy_attn \
           -report_every 30 \
           -encoder_type hier3 \
           -global_attention general \
           -save_checkpoint_steps 500 \
           -start_decay_steps 1500 \
           -pre_word_vecs_enc data/ami/ami_embeddings.enc.pt \
           -pre_word_vecs_dec data/ami/ami_embeddings.dec.pt \
           -log_file logs/ami_qg_pretrain.txt \
           -save_config logs/ami_qg_pretrain.txt

(2) Fine-tune on AMI.

CUDA_VISIBLE_DEVICES=X python train.py -save_model ami_final/AMI \
           -data data/ami/AMI \
           -speaker_type ami \
           -train_from ami_qg_pretrain/xxx.pt  \
           -reset_optim all \
           -batch_size 1 \
           -learning_rate 0.0005 \
           -share_embeddings \
           -share_decoder_embeddings \
           -copy_attn \
           -reuse_copy_attn \
           -encoder_type hier3 \
           -global_attention general \
           -dropout 0.5 \
           -attention_dropout 0.5 \
           -start_decay_steps 500 \
           -decay_steps 500 \
           -log_file logs/ami_final.txt \
           -save_config logs/ami_final.txt
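
Replace ami_qg_pretrain/xxx.pt with the checkpoint saved during pre-training; -reset_optim all discards the pre-training optimizer state so fine-tuning starts fresh with the schedule above.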

Translate

CUDA_VISIBLE_DEVICES=X python translate.py -batch_size 1 \
               -src data/ami/ami/test.src \
               -tgt data/ami/ami/test.tgt \
               -seg data/ami/ami/test.seg \
               -speaker data/ami/ami/test.speaker \
               -relation data/ami/ami/test.relation \
               -beam_size 10 \
               -share_vocab \
               -dynamic_dict \
               -replace_unk \
               -model xxx.pt \
               -output xxx.txt \
               -block_ngram_repeat 3 \
               -gpu 0 \
               -min_length 280 \
               -max_length 450

For ICSI

Preprocess

(1) Preprocess the ICSI dataset.

python preprocess.py -train_src data/icsi/icsi/train.src \
                     -train_tgt data/icsi/icsi/train.tgt \
                     -train_seg data/icsi/icsi/train.seg \
                     -train_speaker data/icsi/icsi/train.speaker \
                     -train_relation data/icsi/icsi/train.relation \
                     -valid_src data/icsi/icsi/valid.src \
                     -valid_tgt data/icsi/icsi/valid.tgt \
                     -valid_seg data/icsi/icsi/valid.seg \
                     -valid_speaker data/icsi/icsi/valid.speaker \
                     -valid_relation data/icsi/icsi/valid.relation \
                     -save_data data/icsi/ICSI \
                     -src_seq_length 20000 \
                     -src_seq_length_trunc 20000 \
                     -tgt_seq_length 700 \
                     -tgt_seq_length_trunc 700 \
                     -dynamic_dict \
                     -share_vocab \
                     -lower \
                     -overwrite
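
Compared with the AMI command, this adds -src_seq_length*/-tgt_seq_length* limits because ICSI meetings and their summaries are considerably longer.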

(2) Create pre-trained word embeddings.

python embeddings_to_torch.py -emb_file_both data/glove/glove.6B.300d.txt \
                              -dict_file data/icsi/ICSI.vocab.pt \
                              -output_file data/icsi/icsi_embeddings

(3) Preprocess the pseudo summarization dataset.

python preprocess.py -train_src data/icsi/icsi_qg/train.src \
                     -train_tgt data/icsi/icsi_qg/train.tgt \
                     -train_seg data/icsi/icsi_qg/train.seg \
                     -train_speaker data/icsi/icsi_qg/train.speaker \
                     -train_relation data/icsi/icsi_qg/train.relation \
                     -save_data data/icsi/ICSIQG \
                     -lower \
                     -overwrite \
                     -shard_size 500 \
                     -dynamic_dict \
                     -share_vocab

Train

(1) Pre-train DDAMS on the pseudo summarization dataset (as for AMI: save the config first, then rerun without -save_config).

CUDA_VISIBLE_DEVICES=X python train.py -save_model icsi_qg_pretrain/ICSI \
           -data data/icsi/ICSIQG \
           -speaker_type icsi \
           -batch_size 64 \
           -learning_rate 0.001 \
           -share_embeddings \
           -share_decoder_embeddings \
           -copy_attn \
           -reuse_copy_attn \
           -report_every 30 \
           -encoder_type hier3 \
           -global_attention general \
           -save_checkpoint_steps 500 \
           -start_decay_steps 1500 \
           -pre_word_vecs_enc data/icsi/icsi_embeddings.enc.pt \
           -pre_word_vecs_dec data/icsi/icsi_embeddings.dec.pt \
           -log_file logs/icsi_qg_pretrain.txt \
           -save_config logs/icsi_qg_pretrain.txt

(2) Fine-tune on ICSI.

CUDA_VISIBLE_DEVICES=X python train.py -save_model icsi_final/ICSI \
           -data data/icsi/ICSI \
           -speaker_type icsi \
           -train_from icsi_qg_pretrain/xxx.pt  \
           -reset_optim all \
           -batch_size 1 \
           -learning_rate 0.0005 \
           -share_embeddings \
           -share_decoder_embeddings \
           -copy_attn \
           -reuse_copy_attn \
           -encoder_type hier3 \
           -global_attention general \
           -dropout 0.5 \
           -attention_dropout 0.5 \
           -start_decay_steps 1000 \
           -decay_steps 100 \
           -save_checkpoint_steps 50 \
           -valid_steps 50 \
           -log_file logs/icsi_final.txt \
           -save_config logs/icsi_final.txt

Translate

CUDA_VISIBLE_DEVICES=X python translate.py -batch_size 1 \
               -src data/icsi/icsi/test.src \
               -seg data/icsi/icsi/test.seg \
               -speaker data/icsi/icsi/test.speaker \
               -relation data/icsi/icsi/test.relation \
               -beam_size 10 \
               -share_vocab \
               -dynamic_dict \
               -replace_unk \
               -model xxx.pt \
               -output xxx.txt \
               -block_ngram_repeat 3 \
               -gpu 0 \
               -min_length 400 \
               -max_length 550

Test Rouge

(1) Before the ROUGE test, we first remove the special tags <t> and </t>.

sed -i 's/ <\/t>//g' xxx.txt
sed -i 's/<t> //g' xxx.txt

(2) Test the ROUGE score.

python test_rouge.py -c summaries/xxx.txt
python test_rouge_icsi.py -c summaries/xxx.txt