NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-based Simulation (ACL-IJCNLP 2021)

Overview

NeuralWOZ

This code is the official implementation of "NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-based Simulation".

Sungdong Kim, Minsuk Chang, Sang-woo Lee
In ACL 2021.

Citation

@inproceedings{kim2021neuralwoz,
  title={NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-based Simulation},
  author={Kim, Sungdong and Chang, Minsuk and Lee, Sang-woo},
  booktitle={ACL},
  year={2021}
}

Requirements

python3.6
torch==1.4.0
transformers==2.11.0

Please install apex for mixed-precision training.
See requirements.txt for details.
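
A typical environment setup (assuming pip) might look like the following; for apex, follow the installation instructions in the NVIDIA repository:

pip3 install -r requirements.txt
# mixed-precision training (--fp16) additionally needs apex: https://github.com/NVIDIA/apex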

Data Download and Preprocessing

1. Download dataset

Please run this script first. It creates the data directory, then saves and preprocesses the MultiWOZ 2.1 dataset.

python3 create_data.py

2. Preprocessing

To train NeuralWOZ under various settings, you should create the corresponding training instances by running the script below.

python3 neuralwoz/preprocess.py --exceptd $TARGET_DOMAIN --fewshot_ratio $FEWSHOT_RATIO
  • exceptd: the "target domain" to exclude from the training dataset for the leave-one-out scheme. It is one of (hotel|restaurant|attraction|train|taxi).
  • fewshot_ratio: the proportion of target-domain examples to include. The default is 0., which means zero-shot. It is one of (0.|0.01|0.05|0.1). You can check the few-shot examples in assets/fewshot_key.json.

This script will create "$TARGET_DOMAIN_$FEWSHOT_RATIO_collector_(train|dev).json" and "$TARGET_DOMAIN_$FEWSHOT_RATIO_labeler_train.h5".
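
For example, preprocessing for the zero-shot hotel setting in the leave-one-out scheme would be (values are illustrative):

python3 neuralwoz/preprocess.py --exceptd hotel --fewshot_ratio 0.

This call produces hotel_0.0_collector_(train|dev).json and hotel_0.0_labeler_train.h5.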

Training NeuralWOZ

You should specify output_path to save the trained model.
After training, each output consists of the following four files:

  • pytorch_model.bin
  • config.json
  • vocab.json
  • merges.txt

For each zero/few-shot setting, you should set TRAIN_DATA and DEV_DATA to the files from the preprocessing step. For example, hotel_0.0_collector_(train|dev).json should be used for Collector training when the target domain is hotel in the zero-shot domain transfer task.

We use N_GPU=4 and N_ACCUM=2 for Collector training and N_GPU=2 and N_ACCUM=2 for Labeler training to reach an effective batch size of 32 on V100 32GB GPUs; an illustrative variable setup is shown below.
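
For instance, training the Collector in the zero-shot hotel setting could start from a setup like this (OUTPUT_PATH is a hypothetical example path):

TRAIN_DATA=hotel_0.0_collector_train.json
DEV_DATA=hotel_0.0_collector_dev.json
N_GPU=4
N_ACCUM=2
OUTPUT_PATH=models/collector_hotel_0.0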

1. Collector

python3 neuralwoz/train_collector.py \
  --dataset_dir data \
  --output_path $OUTPUT_PATH \
  --model_name_or_path facebook/bart-large \
  --train_data $TRAIN_DATA \
  --dev_data $DEV_DATA \
  --n_gpu $N_GPU \
  --per_gpu_train_batch_size 4 \
  --num_train_epochs 30 \
  --learning_rate 1e-5 \
  --gradient_accumulation_steps $N_ACCUM \
  --warmup_steps 1000 \
  --fp16

2. Labeler

python3 neuralwoz/train_labeler.py \
  --dataset_dir data \
  --output_path $OUTPUT_PATH \
  --model_name_or_path roberta-base-dream \
  --train_data $TRAIN_DATA \
  --dev_data labeler_dev_data.json \
  --n_gpu $N_GPU \
  --per_gpu_train_batch_size 8 \
  --num_train_epochs 10 \
  --learning_rate 1e-5 \
  --gradient_accumulation_steps $N_ACCUM \
  --warmup_steps 1000 \
  --beta 5. \
  --fp16

Download Synthetic Dialogues from NeuralWOZ

Please download the synthetic dialogues from here.

  • The naming convention is nwoz_{target_domain}_{fewshot_proportion}.json
  • Each file contains dialogues synthesized by our NeuralWOZ
  • Specifically, it contains synthetic dialogues for the target domain while excluding the original dialogues from that domain (leave-one-out setup)
  • You can look up the i-th synthesized dialogue in each file via the dialogue_idx key, whose value follows aug_{target_domain}_{fewshot_proportion}_{i} (see the sketch after this list)
  • You can use the JSON file directly to train a zero/few-shot learner for the DST task
  • Please see the readme for training TRADE and the readme for training SUMBT using the dataset
  • If you want to synthesize your own dialogues, please see the sections below.
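
As a quick sanity check, the sketch below can be used to look up a synthesized dialogue. It assumes the downloaded file is a JSON list of dialogue objects, each carrying the dialogue_idx field described above; the rest of the structure may differ, so inspect the keys first.

import json

# Load synthetic dialogues for the zero-shot hotel setting
# (file names follow nwoz_{target_domain}_{fewshot_proportion}.json).
with open("nwoz_hotel_0.0.json") as f:
    dialogues = json.load(f)

# Index dialogues by dialogue_idx and fetch the 0th synthesized one,
# whose id follows aug_{target_domain}_{fewshot_proportion}_{i}.
by_idx = {d["dialogue_idx"]: d for d in dialogues}
print(by_idx["aug_hotel_0.0_0"].keys())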

Download Pretrained Models

Pretrained models are available at this link. The naming conventions are as follows:

  • NEURALWOZ: (Collector|Labeler)_{target_domain}_{fewshot_proportion}.tar.gz
  • TRADE: nwoz_TRADE_{target_domain}_{fewshot_proportion}.tar.gz
  • SUMBT: nwoz_SUMBT_{target_domain}_{fewshot_proportion}.tar.gz

To synthesize your own dialogues, please download and unzip both the Collector and the Labeler for the same target domain and fewshot_proportion into $COLLECTOR_PATH and $LABELER_PATH, respectively.

Please use tar -zxvf MODEL.tar.gz to unpack them, as in the example below.
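
For example, for the zero-shot hotel setting (assuming each archive unpacks into a directory named like the file):

tar -zxvf Collector_hotel_0.0.tar.gz
tar -zxvf Labeler_hotel_0.0.tar.gz
COLLECTOR_PATH=Collector_hotel_0.0
LABELER_PATH=Labeler_hotel_0.0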

Generate Synthetic Dialogues using NeuralWOZ
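
The variables used in the command below can be set, for instance, as follows (NUM_DIALOGUES is a hypothetical value; COLLECTOR_PATH and LABELER_PATH are the directories unpacked above):

TARGET_DOMAIN=hotel
NUM_DIALOGUES=1000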

python3 neuralwoz/run_neuralwoz.py \
  --dataset_dir data \
  --output_dir data \
  --output_file_name neuralwoz-output.json \
  --target_data collector_dev_data.json \
  --include_domain $TARGET_DOMAIN \
  --collector_path $COLLECTOR_PATH \
  --labeler_path $LABELER_PATH \
  --num_dialogues $NUM_DIALOGUES \
  --batch_size 16 \
  --num_beams 1 \
  --top_k 0 \
  --top_p 0.98 \
  --temperature 0.9 \
  --include_missing_dontcare

License

Copyright 2021-present NAVER Corp.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.