KE-Dialogue: Injecting a knowledge graph into a fully end-to-end dialogue system.

Overview

Learning Knowledge Bases with Parameters for Task-Oriented Dialogue Systems

License: MIT

This is the implementation of the paper:

Learning Knowledge Bases with Parameters for Task-Oriented Dialogue Systems. Andrea Madotto, Samuel Cahyawijaya, Genta Indra Winata, Yan Xu, Zihan Liu, Zhaojiang Lin, Pascale Fung. Findings of EMNLP 2020 [PDF]

If you use any source code or datasets included in this toolkit in your work, please cite the following paper. The bibtex is listed below:

@article{madotto2020learning,
  title={Learning Knowledge Bases with Parameters for Task-Oriented Dialogue Systems},
  author={Madotto, Andrea and Cahyawijaya, Samuel and Winata, Genta Indra and Xu, Yan and Liu, Zihan and Lin, Zhaojiang and Fung, Pascale},
  journal={arXiv preprint arXiv:2009.13656},
  year={2020}
}

Abstract

Task-oriented dialogue systems are either modularized with separate dialogue state tracking (DST) and management steps or end-to-end trainable. In either case, the knowledge base (KB) plays an essential role in fulfilling user requests. Modularized systems rely on DST to interact with the KB, which is expensive in terms of annotation and inference time. End-to-end systems use the KB directly as input, but they cannot scale when the KB is larger than a few hundred entries. In this paper, we propose a method to embed the KB, of any size, directly into the model parameters. The resulting model does not require any DST or template responses, nor the KB as input, and it can dynamically update its KB via fine-tuning. We evaluate our solution in five task-oriented dialogue datasets with small, medium, and large KB sizes. Our experiments show that end-to-end models can effectively embed knowledge bases in their parameters and achieve competitive performance in all evaluated datasets.

Knowledge-embedded Dialogue:

During training, the KE dialogues are generated by filling the *TEMPLATE* with the *user goal query* results, and they are used to embed the KB into the model parameters θ. At test time, the model does not use any external knowledge to generate the correct responses.

Dependencies

We list our dependencies in requirements.txt; you can install them by running

❱❱❱ pip install -r requirements.txt

In addition, our code includes fp16 support with apex. You can find the package at https://github.com/NVIDIA/apex.
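
A typical installation is sketched below, following the apex README (this is the Python-only build; see their repository for the CUDA/C++ extension build):

❱❱❱ git clone https://github.com/NVIDIA/apex
❱❱❱ cd apex
❱❱❱ pip install -v --no-cache-dir ./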

Experiments

bAbI-5

Dataset

Download the preprocessed dataset and put the zip file inside the ./knowledge_embed/babi5 folder. Extract the zip file by executing

❱❱❱ cd ./knowledge_embed/babi5
❱❱❱ unzip dialog-bAbI-tasks.zip

Generate the delexicalized dialogues from the bAbI-5 dataset via

❱❱❱ python3 generate_delexicalization_babi.py

Generate the lexicalized data from the bAbI-5 dataset via

❱❱❱ python generate_dialogues_babi5.py --dialogue_path ./dialog-bAbI-tasks/dialog-babi-task5trn_record-delex.txt --knowledge_path ./dialog-bAbI-tasks/dialog-babi-kb-all.txt --output_folder ./dialog-bAbI-tasks --num_augmented_knowledge <num_augmented_knowledge> --num_augmented_dialogue <num_augmented_dialogues> --random_seed 0

The maximum <num_augmented_knowledge> is 558 (recommended) and the maximum <num_augmented_dialogues> is 264, as these correspond to the number of knowledge entries and the number of dialogues in the bAbI-5 dataset.
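
For example, to generate the augmentation with the recommended maximum values:

❱❱❱ python generate_dialogues_babi5.py --dialogue_path ./dialog-bAbI-tasks/dialog-babi-task5trn_record-delex.txt --knowledge_path ./dialog-bAbI-tasks/dialog-babi-kb-all.txt --output_folder ./dialog-bAbI-tasks --num_augmented_knowledge 558 --num_augmented_dialogue 264 --random_seed 0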

Fine-tune GPT-2

We provide the checkpoint of a GPT-2 model fine-tuned on the bAbI-5 training set. You can also choose to train the model yourself using the following command.

❱❱❱ cd ./modeling/babi5
❱❱❱ python main.py --model_checkpoint gpt2 --dataset BABI --dataset_path ../../knowledge_embed/babi5/dialog-bAbI-tasks --n_epochs <num_epoch> --kbpercentage <num_augmented_dialogues>

Note that the value of --kbpercentage is equal to the <num_augmented_dialogues> that comes from the lexicalization step. This parameter is used to select the augmentation file to embed into the training dataset.
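
For example, if you generated the augmentation with --num_augmented_dialogue 264, the training command would look like the following (the number of epochs here is an illustrative choice):

❱❱❱ python main.py --model_checkpoint gpt2 --dataset BABI --dataset_path ../../knowledge_embed/babi5/dialog-bAbI-tasks --n_epochs 10 --kbpercentage 264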

You can evaluate the model by executing the following script

❱❱❱ python evaluate.py --model_checkpoint <model_checkpoint_folder> --dataset BABI --dataset_path ../../knowledge_embed/babi5/dialog-bAbI-tasks

Scoring bAbI-5

To run the scorer for the bAbI-5 task model, you can run the following command. The scorer will read all of the result.json files under the runs folder generated by evaluate.py.

❱❱❱ python scorer_BABI5.py --model_checkpoint <model_checkpoint> --dataset BABI --dataset_path ../../knowledge_embed/babi5/dialog-bAbI-tasks --kbpercentage 0

CamRest

Dataset

Download the preprocessed dataset and put the zip file under the ./knowledge_embed/camrest folder. Extract the zip file by executing

❱❱❱ cd ./knowledge_embed/camrest
❱❱❱ unzip CamRest.zip

Generate the delexicalized dialogues from the CamRest dataset via

❱❱❱ python3 generate_delexicalization_CAMREST.py

Generate the lexicalized data from the CamRest dataset via

❱❱❱ python generate_dialogues_CAMREST.py --dialogue_path ./CamRest/train_record-delex.txt --knowledge_path ./CamRest/KB.json --output_folder ./CamRest --num_augmented_knowledge <num_augmented_knowledge> --num_augmented_dialogue <num_augmented_dialogues> --random_seed 0

The maximum <num_augmented_knowledge> is 201 (recommended) and the maximum <num_augmented_dialogues> is 156, as these correspond to the number of knowledge entries and the number of dialogues in the CamRest dataset.
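
For example, to generate the augmentation with the recommended maximum values:

❱❱❱ python generate_dialogues_CAMREST.py --dialogue_path ./CamRest/train_record-delex.txt --knowledge_path ./CamRest/KB.json --output_folder ./CamRest --num_augmented_knowledge 201 --num_augmented_dialogue 156 --random_seed 0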

Fine-tune GPT-2

We provide the checkpoint of a GPT-2 model fine-tuned on the CamRest training set. You can also choose to train the model yourself using the following command.

❱❱❱ cd ./modeling/camrest/
❱❱❱ python main.py --model_checkpoint gpt2 --dataset CAMREST --dataset_path ../../knowledge_embed/camrest/CamRest --n_epochs <num_epoch> --kbpercentage <num_augmented_dialogues>

Note that the value of --kbpercentage is equal to the <num_augmented_dialogues> that comes from the lexicalization step. This parameter is used to select the augmentation file to embed into the training dataset.
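
For example, with the augmentation generated above using --num_augmented_dialogue 156 (again, the number of epochs is an illustrative choice):

❱❱❱ python main.py --model_checkpoint gpt2 --dataset CAMREST --dataset_path ../../knowledge_embed/camrest/CamRest --n_epochs 10 --kbpercentage 156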

You can evaluate the model by executing the following script

❱❱❱ python evaluate.py --model_checkpoint <model_checkpoint_folder> --dataset CAMREST --dataset_path ../../knowledge_embed/camrest/CamRest

Scoring CamRest

To run the scorer for the CamRest task model, you can run the following command. The scorer will read all of the result.json files under the runs folder generated by evaluate.py.

❱❱❱ python scorer_CAMREST.py --model_checkpoint <model_checkpoint> --dataset CAMREST --dataset_path ../../knowledge_embed/camrest/CamRest --kbpercentage 0

SMD

Dataset

Download the preprocessed dataset and put it under the ./knowledge_embed/smd folder. Extract the zip file by executing

❱❱❱ cd ./knowledge_embed/smd
❱❱❱ unzip SMD.zip

Fine-tune GPT-2

We provide the checkpoint of a GPT-2 model fine-tuned on the SMD training set. Download the checkpoint and put it under the ./modeling folder.

❱❱❱ cd ./knowledge_embed/smd
❱❱❱ mkdir ./runs
❱❱❱ unzip ./knowledge_embed/smd/SMD_gpt2_graph_False_adj_False_edge_False_unilm_False_flattenKB_False_historyL_1000000000_lr_6.25e-05_epoch_10_weighttie_False_kbpercentage_0_layer_12.zip -d ./runs

You can also choose to train the model yourself using the following command.

❱❱❱ cd ./modeling/smd
❱❱❱ python main.py --dataset SMD --lr 6.25e-05 --n_epochs 10 --kbpercentage 0 --layers 12

Prepare Knowledge-embedded dialogues

First, we need to build the databases for SQL queries.

❱❱❱ cd ./knowledge_embed/smd
❱❱❱ python generate_dialogues_SMD.py --build_db --split test

Then we generate dialogues based on pre-designed, per-domain templates. The following command generates dialogues in the weather domain. Replace weather with navigate or schedule in the dialogue_path and domain arguments to generate dialogues in the other two domains (see the example commands after the weather command below). You can also change the number of templates used in the relexicalization process via the num_augmented_dialogue argument.

❱❱❱ python generate_dialogues_SMD.py --split test --dialogue_path ./templates/weather_template.txt --domain weather --num_augmented_dialogue 100 --output_folder ./SMD/test
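
For example, the corresponding commands for the other two domains would be (assuming the template files follow the same naming pattern as weather_template.txt):

❱❱❱ python generate_dialogues_SMD.py --split test --dialogue_path ./templates/navigate_template.txt --domain navigate --num_augmented_dialogue 100 --output_folder ./SMD/test
❱❱❱ python generate_dialogues_SMD.py --split test --dialogue_path ./templates/schedule_template.txt --domain schedule --num_augmented_dialogue 100 --output_folder ./SMD/test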

Adapt the fine-tuned GPT-2 model to the test set

❱❱❱ python evaluate_finetune.py --dataset SMD --model_checkpoint runs/SMD_gpt2_graph_False_adj_False_edge_False_unilm_False_flattenKB_False_historyL_1000000000_lr_6.25e-05_epoch_10_weighttie_False_kbpercentage_0_layer_12 --top_k 1 --eval_indices 0,303 --filter_domain ""

You can also speed up the fine-tuning process by running experiments in parallel. Please modify the GPU setting at line 14 (#L14) of the code.

❱❱❱ python runner_expe_SMD.py 

MWOZ (2.1)

Dataset

Download the preprocessed dataset and put it under the ./knowledge_embed/mwoz folder. Extract the zip file by executing

❱❱❱ cd ./knowledge_embed/mwoz
❱❱❱ unzip mwoz.zip

Prepare Knowledge-Embedded dialogues (you can skip this step if you have downloaded the zip file above)

You can prepare the datasets by running

❱❱❱ bash generate_MWOZ_all_data.sh

The shell script generates the delexicalized dialogues from the MWOZ dataset by calling

❱❱❱ python generate_delex_MWOZ_ATTRACTION.py
❱❱❱ python generate_delex_MWOZ_HOTEL.py
❱❱❱ python generate_delex_MWOZ_RESTAURANT.py
❱❱❱ python generate_delex_MWOZ_TRAIN.py
❱❱❱ python generate_redelex_augmented_MWOZ.py
❱❱❱ python generate_MWOZ_dataset.py

Fine-tune GPT-2

We provide the checkpoint of a GPT-2 model fine-tuned on the MWOZ training set. Download the checkpoint and put it under the ./modeling folder.

❱❱❱ cd ./knowledge_embed/mwoz
❱❱❱ mkdir ./runs
❱❱❱ unzip ./mwoz.zip -d ./runs

You can also choose to train the model yourself using the following command.

❱❱❱ cd ./modeling/mwoz
❱❱❱ python main.py --model_checkpoint gpt2 --dataset MWOZ_SINGLE --max_history 50 --train_batch_size 6 --kbpercentage 100 --fp16 O2 --gradient_accumulation_steps 3 --balance_sampler --n_epochs 10

OpenDialKG

Getting Started

We use the neo4j community server edition and the apoc library for processing graph data. apoc is used to parallelize the queries in neo4j, so that we can process large-scale graphs faster.

Before proceeding to the dataset section, you need to ensure that you have neo4j (https://neo4j.com/download-center/#community) and apoc (https://neo4j.com/developer/neo4j-apoc/) installed on your system.

If you are not familiar with CYPHER and apoc syntax, you can follow the tutorials at https://neo4j.com/developer/cypher/ and https://neo4j.com/blog/intro-user-defined-procedures-apoc/.
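
As a quick sanity check that neo4j is up and reachable, you can run a trivial query with cypher-shell (a sketch assuming the default bolt address used by the generation script below and your own credentials):

❱❱❱ cypher-shell -a bolt://localhost:7687 -u neo4j -p <your_password> "RETURN 1;"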

Dataset

Download the original dataset (https://drive.google.com/file/d/1llH4-4-h39sALnkXmGR8R6090xotE0PE/view?usp=sharing) and put the zip file inside the ./knowledge_embed/opendialkg folder. Extract the zip file by executing

❱❱❱ cd ./knowledge_embed/opendialkg
❱❱❱ unzip <downloaded_zip_file>

Generate the delexicalized dialogues from the opendialkg dataset via (WARNING: this requires around 12 hours to run)

❱❱❱ python3 generate_delexicalization_DIALKG.py

This script will produce ./opendialkg/dialogkg_train_meta.pt, which will be used to generate the lexicalized dialogues. You can then generate the lexicalized dialogues from the opendialkg dataset via

❱❱❱ python generate_dialogues_DIALKG.py --random_seed <random_seed> --batch_size 100 --max_iteration <max_iter> --stop_count <stop_count> --connection_string bolt://localhost:7687

This script produces at most batch_size * max_iter dialogue samples; in every batch there is a possibility that no valid candidate is found, resulting in fewer samples. Generation is further limited by stop_count, which stops the process once the number of generated samples reaches the specified stop_count. The script produces four files: ./opendialkg/db_count_records_{random_seed}.csv, ./opendialkg/used_count_records_{random_seed}.csv, and ./opendialkg/generation_iteration_{random_seed}.csv, which are used for checking the distribution shift of the counts in the DB; and ./opendialkg/generated_dialogue_bs100_rs{random_seed}.json, which contains the generated samples.
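
As a quick sanity check, you can count the generated dialogues with jq (assuming the output JSON is a top-level list of samples; rs0 corresponds to --random_seed 0):

❱❱❱ jq length ./opendialkg/generated_dialogue_bs100_rs0.json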

Notes:

  • You might need to change the neo4j password inside generate_delexicalization_DIALKG.py and generate_dialogues_DIALKG.py manually.
  • Because there is a huge number of possible connections in dialkg, we use a sampling method to generate the data, so the random seed is crucial if you want reproducible results; see the example run after this list.
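
For instance, a reproducible run with random seed 0 might look like the following (the max_iteration and stop_count values here are illustrative):

❱❱❱ python generate_dialogues_DIALKG.py --random_seed 0 --batch_size 100 --max_iteration 100 --stop_count 1000 --connection_string bolt://localhost:7687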

Fine-tune GPT-2

We provide the checkpoint of a GPT-2 model fine-tuned on the opendialkg training set. You can also choose to train the model yourself using the following command.

❱❱❱ cd ./modeling/opendialkg
❱❱❱ python main.py --dataset_path ../../knowledge_embed/opendialkg/opendialkg --model_checkpoint gpt2 --dataset DIALKG --n_epochs 50 --kbpercentage <random_seed> --train_batch_size 8 --valid_batch_size 8

Note that the value of --kbpercentage is equal to the <random_seed> that comes from the lexicalization step. This parameter is used to select the augmentation file to embed into the training dataset.

You can evaluate the model by executing the following script

❱❱❱ python evaluate.py --model_checkpoint <model_checkpoint_folder> --dataset DIALKG --dataset_path ../../knowledge_embed/opendialkg/opendialkg

Scoring OpenDialKG

To run the scorer for the OpenDialKG task model, you can run the following command. The scorer will read all of the result.json files under the runs folder generated by evaluate.py.

❱❱❱ python scorer_DIALKG5.py --model_checkpoint <model_checkpoint> --dataset DIALKG --dataset_path ../../knowledge_embed/opendialkg/opendialkg --kbpercentage 0

Further Details

For details regarding the experiments, hyperparameters, and evaluation results, please refer to the main paper and supplementary materials of our work.

Owner

CAiRE