Code for our IJCAI 2021 paper: Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization


DDAMS

This is the PyTorch code for our IJCAI 2021 paper Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization [arXiv preprint].

Requirements

  • We use Conda with Python 3.7 and strongly recommend that you create a new environment: conda create -n ddams python=3.7, then activate it: conda activate ddams.
  • Install the dependencies: pip install -r requirements.txt.

Data

You can download the data here and put it under the project directory as DDAMS/data/xxx; the expected layout is listed below, with a quick check script after the list.

  • data/ami
    • data/ami/ami: preprocessed meeting data.
    • data/ami/ami_qg: pseudo summarization data.
    • data/ami/ami_reference: gold references for the test set.
  • data/icsi
    • data/icsi/icsi: preprocessed meeting data.
    • data/icsi/icsi_qg: pseudo summarization data.
    • data/icsi/icsi_reference: gold references for the test set.
  • data/glove: the pre-trained word embeddings glove.6B.300d.txt.
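
The list above is the layout that all commands below assume. As a quick sanity check (a convenience sketch, not part of the repo), you can verify that everything landed in the right place:

import os

# expected locations, taken from the list above
expected = [
    'data/ami/ami', 'data/ami/ami_qg', 'data/ami/ami_reference',
    'data/icsi/icsi', 'data/icsi/icsi_qg', 'data/icsi/icsi_reference',
    'data/glove/glove.6B.300d.txt',
]
for path in expected:
    status = 'ok' if os.path.exists(path) else 'MISSING'
    print(f'{status:8s} {path}')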

Reproduce Results

Follow the steps below to reproduce the best results from our paper.

download checkpoints

Download the checkpoints here and put them, AMI.pt and ICSI.pt, under the project directory as DDAMS/models/xx.pt.

translate

Produce the final summaries. The commands below decode with beam search (-beam_size 10), block repeated trigrams (-block_ngram_repeat 3), replace <unk> tokens via attention (-replace_unk), and bound the summary length in tokens (-min_length/-max_length).

For AMI, this produces summaries/ami_summary.txt.

CUDA_VISIBLE_DEVICES=X python translate.py -batch_size 1 \
               -src data/ami/ami/test.src \
               -tgt data/ami/ami/test.tgt \
               -seg data/ami/ami/test.seg \
               -speaker data/ami/ami/test.speaker \
               -relation data/ami/ami/test.relation \
               -beam_size 10 \
               -share_vocab \
               -dynamic_dict \
               -replace_unk \
               -model models/AMI.pt \
               -output summaries/ami_summary.txt \
               -block_ngram_repeat 3 \
               -gpu 0 \
               -min_length 280 \
               -max_length 450

For ICSI, this produces summaries/icsi_summary.txt.

CUDA_VISIBLE_DEVICES=X python translate.py -batch_size 1 \
               -src data/icsi/icsi/test.src \
               -seg data/icsi/icsi/test.seg \
               -speaker data/icsi/icsi/test.speaker \
               -relation data/icsi/icsi/test.relation \
               -beam_size 10 \
               -share_vocab \
               -dynamic_dict \
               -replace_unk \
               -model models/ICSI.pt \
               -output summaries/icsi_summary.txt \
               -block_ngram_repeat 3 \
               -gpu 0 \
               -min_length 400 \
               -max_length 550

remove tags

The sentence-boundary tags <t> and </t> will raise errors in the ROUGE test, so we first remove them (following OpenNMT).

sed -i 's/ <\/t>//g' summaries/ami_summary.txt
sed -i 's/<t> //g' summaries/ami_summary.txt
sed -i 's/ <\/t>//g' summaries/icsi_summary.txt
sed -i 's/<t> //g' summaries/icsi_summary.txt

test rouge score

  • Change the path in pyrouge.Rouge155() to your local ROUGE-1.5.5 installation.

Output format >> ROUGE(1/2/L): xx.xx-xx.xx-xx.xx

python test_rouge.py -c summaries/ami_summary.txt
python test_rouge_icsi.py -c summaries/icsi_summary.txt
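
If you need to adapt the scoring scripts, the core of a pyrouge-based test looks roughly like the sketch below. The directories and filename patterns here are illustrative assumptions, not the repo's actual ones; only the Rouge155 path corresponds to the bullet above.

from pyrouge import Rouge155

# point this at your local ROUGE-1.5.5 installation
# (the same path you set in test_rouge.py)
r = Rouge155('/path/to/ROUGE-1.5.5')
r.system_dir = 'tmp/system'                        # generated summaries
r.model_dir = 'tmp/reference'                      # gold references
r.system_filename_pattern = r'(\d+)_summary.txt'   # hypothetical pattern
r.model_filename_pattern = '#ID#_reference.txt'    # hypothetical pattern

output = r.convert_and_evaluate()
scores = r.output_to_dict(output)
print('ROUGE(1/2/L): {:.2f}-{:.2f}-{:.2f}'.format(
    scores['rouge_1_f_score'] * 100,
    scores['rouge_2_f_score'] * 100,
    scores['rouge_l_f_score'] * 100))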

ROUGE score

You will get the following ROUGE scores.

        ROUGE-1   ROUGE-2   ROUGE-L
AMI       53.15     22.32     25.67
ICSI      40.41     11.02     19.18

From Scratch

For AMI

Preprocess

(1) Preprocess the AMI dataset.

python preprocess.py -train_src data/ami/ami/train.src \
                     -train_tgt data/ami/ami/train.tgt \
                     -train_seg data/ami/ami/train.seg \
                     -train_speaker data/ami/ami/train.speaker \
                     -train_relation data/ami/ami/train.relation \
                     -valid_src data/ami/ami/valid.src \
                     -valid_tgt data/ami/ami/valid.tgt \
                     -valid_seg data/ami/ami/valid.seg \
                     -valid_speaker data/ami/ami/valid.speaker \
                     -valid_relation data/ami/ami/valid.relation \
                     -save_data data/ami/AMI \
                     -dynamic_dict \
                     -share_vocab \
                     -lower \
                     -overwrite

(2) Create pre-trained word embeddings from the vocabulary built in step (1).

python embeddings_to_torch.py -emb_file_both data/glove/glove.6B.300d.txt \
                              -dict_file data/ami/AMI.vocab.pt \
                              -output_file data/ami/ami_embeddings
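
For reference, embeddings_to_torch.py essentially looks each vocabulary word up in the GloVe file and saves dense encoder/decoder embedding matrices as <output_file>.enc.pt and <output_file>.dec.pt (the files the training commands below load via -pre_word_vecs_enc/-pre_word_vecs_dec). A simplified, self-contained sketch of that lookup; the function names are illustrative, not the script's actual ones:

import torch

def load_glove(path):
    # parse a GloVe text file into {word: 300-dim tensor}
    vectors = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            vectors[parts[0]] = torch.tensor([float(x) for x in parts[1:]])
    return vectors

def build_matrix(itos, vectors, dim=300):
    # one row per vocabulary entry: the GloVe vector when the word is
    # covered, a small random init otherwise
    matrix = torch.randn(len(itos), dim) * 0.1
    for i, word in enumerate(itos):
        if word in vectors:
            matrix[i] = vectors[word]
    return matrix

# e.g., with `itos` being the word list stored in data/ami/AMI.vocab.pt:
# glove = load_glove('data/glove/glove.6B.300d.txt')
# torch.save(build_matrix(itos, glove), 'data/ami/ami_embeddings.enc.pt')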

(3) Preprocess the pseudo summarization dataset.

python preprocess.py -train_src data/ami/ami_qg/train.src \
                     -train_tgt data/ami/ami_qg/train.tgt \
                     -train_seg data/ami/ami_qg/train.seg \
                     -train_speaker data/ami/ami_qg/train.speaker \
                     -train_relation data/ami/ami_qg/train.relation \
                     -save_data data/ami/AMIQG \
                     -lower \
                     -overwrite \
                     -shard_size 500 \
                     -dynamic_dict \
                     -share_vocab

Train

(1) We first pre-train our DDAMS on the pseudo summarization dataset.

  • Run the following command once with -save_config to write the config file.
  • Then remove -save_config and rerun the command to start training.

CUDA_VISIBLE_DEVICES=X python train.py -save_model ami_qg_pretrain/AMI_qg \
           -data data/ami/AMIQG \
           -speaker_type ami \
           -batch_size 64 \
           -learning_rate 0.001 \
           -share_embeddings \
           -share_decoder_embeddings \
           -copy_attn \
           -reuse_copy_attn \
           -report_every 30 \
           -encoder_type hier3 \
           -global_attention general \
           -save_checkpoint_steps 500 \
           -start_decay_steps 1500 \
           -pre_word_vecs_enc data/ami/ami_embeddings.enc.pt \
           -pre_word_vecs_dec data/ami/ami_embeddings.dec.pt \
           -log_file logs/ami_qg_pretrain.txt \
           -save_config logs/ami_qg_pretrain.txt

(2) Fine-tune on AMI.

CUDA_VISIBLE_DEVICES=X python train.py -save_model ami_final/AMI \
           -data data/ami/AMI \
           -speaker_type ami \
           -train_from ami_qg_pretrain/xxx.pt  \
           -reset_optim all \
           -batch_size 1 \
           -learning_rate 0.0005 \
           -share_embeddings \
           -share_decoder_embeddings \
           -copy_attn \
           -reuse_copy_attn \
           -encoder_type hier3 \
           -global_attention general \
           -dropout 0.5 \
           -attention_dropout 0.5 \
           -start_decay_steps 500 \
           -decay_steps 500 \
           -log_file logs/ami_final.txt \
           -save_config logs/ami_final.txt

Translate

CUDA_VISIBLE_DEVICES=X python translate.py -batch_size 1 \
               -src data/ami/ami/test.src \
               -tgt data/ami/ami/test.tgt \
               -seg data/ami/ami/test.seg \
               -speaker data/ami/ami/test.speaker \
               -relation data/ami/ami/test.relation \
               -beam_size 10 \
               -share_vocab \
               -dynamic_dict \
               -replace_unk \
               -model xxx.pt \
               -output xxx.txt \
               -block_ngram_repeat 3 \
               -gpu 0 \
               -min_length 280 \
               -max_length 450

For ICSI

Preprocess

(1) Preprocess the ICSI dataset.

python preprocess.py -train_src data/icsi/icsi/train.src \
                     -train_tgt data/icsi/icsi/train.tgt \
                     -train_seg data/icsi/icsi/train.seg \
                     -train_speaker data/icsi/icsi/train.speaker \
                     -train_relation data/icsi/icsi/train.relation \
                     -valid_src data/icsi/icsi/valid.src \
                     -valid_tgt data/icsi/icsi/valid.tgt \
                     -valid_seg data/icsi/icsi/valid.seg \
                     -valid_speaker data/icsi/icsi/valid.speaker \
                     -valid_relation data/icsi/icsi/valid.relation \
                     -save_data data/icsi/ICSI \
                     -src_seq_length 20000 \
                     -src_seq_length_trunc 20000 \
                     -tgt_seq_length 700 \
                     -tgt_seq_length_trunc 700 \
                     -dynamic_dict \
                     -share_vocab \
                     -lower \
                     -overwrite

(2) Create pre-trained word embeddings from the vocabulary built in step (1).

python embeddings_to_torch.py -emb_file_both data/glove/glove.6B.300d.txt \
                              -dict_file data/icsi/ICSI.vocab.pt \
                              -output_file data/icsi/icsi_embeddings

(3) Preprocess the pseudo summarization dataset.

python preprocess.py -train_src data/icsi/icsi_qg/train.src \
                     -train_tgt data/icsi/icsi_qg/train.tgt \
                     -train_seg data/icsi/icsi_qg/train.seg \
                     -train_speaker data/icsi/icsi_qg/train.speaker \
                     -train_relation data/icsi/icsi_qg/train.relation \
                     -save_data data/icsi/ICSIQG \
                     -lower \
                     -overwrite \
                     -shard_size 500 \
                     -dynamic_dict \
                     -share_vocab

Train

(1) Pre-train on the pseudo summarization dataset (same two-pass -save_config workflow as for AMI).

CUDA_VISIBLE_DEVICES=X python train.py -save_model icsi_qg_pretrain/ICSI \
           -data data/icsi/ICSIQG \
           -speaker_type icsi \
           -batch_size 64 \
           -learning_rate 0.001 \
           -share_embeddings \
           -share_decoder_embeddings \
           -copy_attn \
           -reuse_copy_attn \
           -report_every 30 \
           -encoder_type hier3 \
           -global_attention general \
           -save_checkpoint_steps 500 \
           -start_decay_steps 1500 \
           -pre_word_vecs_enc data/icsi/icsi_embeddings.enc.pt \
           -pre_word_vecs_dec data/icsi/icsi_embeddings.dec.pt \
           -log_file logs/icsi_qg_pretrain.txt \
           -save_config logs/icsi_qg_pretrain.txt

(2) Fine-tune on ICSI.

CUDA_VISIBLE_DEVICES=X python train.py -save_model icsi_final/ICSI \
           -data data/icsi/ICSI \
           -speaker_type icsi \
           -train_from icsi_qg_pretrain/xxx.pt  \
           -reset_optim all \
           -batch_size 1 \
           -learning_rate 0.0005 \
           -share_embeddings \
           -share_decoder_embeddings \
           -copy_attn \
           -reuse_copy_attn \
           -encoder_type hier3 \
           -global_attention general \
           -dropout 0.5 \
           -attention_dropout 0.5 \
           -start_decay_steps 1000 \
           -decay_steps 100 \
           -save_checkpoint_steps 50 \
           -valid_steps 50 \
           -log_file logs/icsi_final.txt \
           -save_config logs/icsi_final.txt

Translate

CUDA_VISIBLE_DEVICES=X python translate.py -batch_size 1 \
               -src data/icsi/icsi/test.src \
               -seg data/icsi/icsi/test.seg \
               -speaker data/icsi/icsi/test.speaker \
               -relation data/icsi/icsi/test.relation \
               -beam_size 10 \
               -share_vocab \
               -dynamic_dict \
               -replace_unk \
               -model xxx.pt \
               -output xxx.txt \
               -block_ngram_repeat 3 \
               -gpu 0 \
               -min_length 400 \
               -max_length 550

Test Rouge

(1) Before the ROUGE test, we should first remove the special tags <t> and </t>.

sed -i 's/ <\/t>//g' xxx.txt
sed -i 's/<t> //g' xxx.txt

(2) Test ROUGE.

python test_rouge.py -c summaries/xxx.txt
python test_rouge_icsi.py -c summaries/xxx.txt