Implementation of EMNLP 2017 Paper "Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog" using PyTorch and ParlAI

Overview

Language Emergence in Multi-Agent Dialog

Code for the Paper

Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog
Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra
EMNLP 2017 (Best Short Paper)

If you find this code useful, please consider citing the original work by the authors:

@article{visdial,
  title = {{N}atural {L}anguage {D}oes {N}ot {E}merge '{N}aturally' in {M}ulti-{A}gent {D}ialog},
  author = {Satwik Kottur and Jos\'e M.F. Moura and Stefan Lee and Dhruv Batra},
  journal = {CoRR},
  volume = {abs/1706.08502},
  year = {2017}
}

Introduction

The paper demonstrates that the language emerging from agent-agent dialog is not necessarily compositional or human-interpretable. To show this, it uses an image guessing game, "Task & Talk", as a testbed. The game involves two bots, a questioner and an answerer.

The answerer is given an image described by three attributes (color, shape, style), as shown in the figure. The questioner cannot see the image and is assigned the task of discovering two of its attributes. The answerer does not know the task. The bots exchange multiple rounds of questions and answers, after which the questioner must guess the two attribute values. Both bots are rewarded based on the questioner's prediction.

Task And Talk
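
As a concrete illustration (not the repo's actual code), a task is an ordered pair of attribute indices, and the shared reward depends on whether the questioner's final guess matches the corresponding attribute values of the image. The reward magnitudes below are illustrative only:

# Illustrative sketch only; attribute ordering and reward magnitudes are assumptions.
ATTRIBUTES = ["color", "shape", "style"]

def ground_truth(obj, task):
    """obj: attribute values, e.g. ['purple', 'triangle', 'filled'];
    task: ordered pair of attribute indices, e.g. [1, 0] for (shape, color)."""
    return [obj[attr_index] for attr_index in task]

obj = ["purple", "triangle", "filled"]
task = [1, 0]
task_names = [ATTRIBUTES[i] for i in task]      # ['shape', 'color']
target = ground_truth(obj, task)                # ['triangle', 'purple']

prediction = ["triangle", "purple"]             # questioner's guess after the dialog
reward = 1.0 if prediction == target else -1.0  # same scalar reward for both bots
print(task_names, target, reward)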

Further, the paper discusses ways to make the grounded language more compositional and human-interpretable by placing restrictions on how the two agents may communicate.

Setup

This repository is compatible only with Python 3, since ParlAI imposes this restriction.

  1. Follow the instructions under the Installing ParlAI section on the ParlAI site.
  2. Follow the instructions on the PyTorch homepage for installing PyTorch (Python 3).
  3. tqdm is used for progress bars and can be installed via pip3.

Dataset Generation

Described in Section 2 and Figure 1 of the paper. A synthetic dataset of shapes with attributes is generated using the data/generate_data.py script. To generate the dataset, simply execute:

cd data
python3 generate_data.py
cd ..

This will create data/synthetic_dataset.json, with 80% of the data used for training (312 samples) and the rest for validation (72 samples). The save path, dataset size, and split ratio can be changed through command-line arguments. For more information:

python3 generate_data.py --help
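
The actual script is configured through the command-line options above; the following self-contained sketch only illustrates the core idea (enumerate all attribute combinations, shuffle, split, and write JSON). The real script's split logic and sample counts may differ:

import itertools
import json
import random

properties = {
    "color": ["red", "green", "blue", "purple"],
    "shape": ["square", "triangle", "circle", "star"],
    "style": ["dotted", "solid", "filled", "dashed"],
}

# All 4 x 4 x 4 = 64 objects, each a [color, shape, style] triple.
objects = [list(combo) for combo in itertools.product(*properties.values())]
random.shuffle(objects)

split = int(0.8 * len(objects))
dataset = {
    "attributes": list(properties.keys()),
    "properties": properties,
    "split_data": {"train": objects[:split], "val": objects[split:]},
    # Each task is an ordered pair of attribute indices the questioner must predict.
    "task_defn": [[0, 1], [1, 0], [0, 2], [2, 0], [1, 2], [2, 1]],
}

with open("synthetic_dataset.json", "w") as f:
    json.dump(dataset, f, indent=4)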

Dataset Schema

{
    "attributes": ["color", "shape", "style"],
    "properties": {
        "color": ["red", "green", "blue", "purple"],
        "shape": ["square", "triangle", "circle", "star"],
        "style": ["dotted", "solid", "filled", "dashed"]
    },
    "split_data": {
        "train": [ ["red", "square", "solid"], ["color2", "shape2", "style2"] ],
        "val": [ ["green", "star", "dashed"], ["color2", "shape2", "style2"] ]
    },
    "task_defn": [ [0, 1], [1, 0], [0, 2], [2, 0], [1, 2], [2, 1] ]
}

A custom PyTorch Dataset class in dataloader.py ingests this dataset and provides random batches or the complete data during training and validation.
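
The real dataloader.py is more involved; the self-contained sketch below (class and method names are assumptions, not the repo's exact code) shows one way a torch.utils.data.Dataset could serve (object, task) pairs from the JSON above:

import json

import torch
from torch.utils.data import Dataset

class ShapesQADataset(Dataset):
    """Hypothetical sketch: serves (object, task) pairs from synthetic_dataset.json."""

    def __init__(self, path, split="train"):
        with open(path) as f:
            data = json.load(f)
        self.attributes = data["attributes"]      # ["color", "shape", "style"]
        self.properties = data["properties"]      # attribute -> list of values
        self.tasks = data["task_defn"]            # ordered pairs of attribute indices
        self.objects = data["split_data"][split]  # list of [color, shape, style]

        # Assign an integer id to every attribute value for tensor encoding.
        all_values = [v for values in self.properties.values() for v in values]
        self.value_to_id = {value: idx for idx, value in enumerate(all_values)}

    def __len__(self):
        return len(self.objects) * len(self.tasks)

    def __getitem__(self, index):
        obj = self.objects[index // len(self.tasks)]
        task = self.tasks[index % len(self.tasks)]
        return {
            "object": torch.tensor([self.value_to_id[v] for v in obj]),
            "task": torch.tensor(task),
        }

Wrapping such a dataset in a torch.utils.data.DataLoader would yield the random batches mentioned above; the repo's own class may expose a different interface.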

Training

Training happens through train.py, which iteratively carries out multiple rounds of dialog per episode between our ParlAI agents, QBot and ABot, both placed in a ParlAI World. The dialog is fully cooperative: both bots receive the same reward after each episode.

The script prints the cumulative reward, training accuracy, and validation accuracy after a fixed number of iterations. World checkpoints are also saved at regular intervals.

Training is controlled by various options, which can be passed through the command line. All of them have suitable defaults set in options.py, though they can easily be tweaked. They can also be viewed with:

python3 train.py --help   # view command line args (you need not change "Main ParlAI Arguments")

The Questioner and Answerer bot classes are defined in bots.py and the World is defined in world.py. The paper describes three configurations for training:

Overcomplete Vocabulary

Described in Section 4.1 of the paper. Both QBot and ABot have vocabulary size equal to the number of possible objects (64).

python3 train.py --data-path /path/to/json --q-out-vocab 64 --a-out-vocab 64

Attribute-Value Vocabulary

Described in Section 4.2 of the paper. QBot has vocabulary size 3 (color, shape, style) and ABot has vocabulary size equal to the number of possible attribute values (4 × 3 = 12).

python3 train.py --data-path /path/to/json --q-out-vocab 3 --a-out-vocab 12

Memoryless ABot, Minimal Vocabulary (best)

Described in Section 4.3 of the paper. QBot has vocabulary size 3 (color, shape, style) and ABot has vocabulary size equal to the number of possible values per attribute (4).

python3 train.py --q-out-vocab 3 --a-out-vocab 4 --data-path /path/to/json --memoryless-abot

Checkpoints are saved by default in the checkpoints directory every 100 epochs. By default, the CPU is used for training; include --use-gpu on the command line to train using the GPU.
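
Judging from the keys read back by the release snippet at the end of this README ('opt', 'qbot', 'abot'), a world checkpoint presumably bundles the training options and both bots' parameters. A hedged sketch of what saving such a checkpoint might look like (the attribute names on world are assumptions, not the repo's exact API):

import torch

def save_world_checkpoint(world, path):
    # Sketch only: key names mirror those loaded in the release snippet below;
    # the attribute names on `world` are assumptions.
    torch.save({
        "opt": world.opt,                 # options the world was created with
        "qbot": world.qbot.state_dict(),  # questioner parameters
        "abot": world.abot.state_dict(),  # answerer parameters
    }, path)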

Refer to the script docstring and inline comments in train.py for an understanding of the execution flow.

Evaluation

Saved world checkpoints can be evaluated using the evaluate.py script. Besides evaluation, the dialog between QBot and ABot for all examples can be saved in JSON format. For evaluation:

python3 evaluate.py --load-path /path/to/pth/checkpoint

Save the bots' conversations by providing the --save-conv-path argument. For more information:

python3 evaluate.py --help

The evaluation script reports the training and validation accuracies of the world. Separate accuracies are reported for matching the first attribute, the second attribute, both attributes, and at least one attribute.
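
For clarity, these four accuracies can be computed from predictions and ground truths as in the following self-contained sketch (illustrative only, not the repo's evaluation code):

# Illustrative only: how the four reported accuracies relate to each other.
def attribute_accuracies(predictions, ground_truths):
    """predictions, ground_truths: lists of [first_attr, second_attr] value pairs."""
    first = second = both = at_least_one = 0
    for pred, gt in zip(predictions, ground_truths):
        first_match = pred[0] == gt[0]
        second_match = pred[1] == gt[1]
        first += first_match
        second += second_match
        both += first_match and second_match
        at_least_one += first_match or second_match
    total = len(ground_truths)
    return {
        "first": 100.0 * first / total,
        "second": 100.0 * second / total,
        "both": 100.0 * both / total,
        "atleast_one": 100.0 * at_least_one / total,
    }

print(attribute_accuracies([["triangle", "purple"]], [["triangle", "purple"]]))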

Sample Conversation

Im: ['purple', 'triangle', 'filled'] -  Task: ['shape', 'color']
    Q1: X    A1: 2
    Q2: Y    A2: 0
    GT: ['triangle', 'purple']  Pred: ['triangle', 'purple']

Pretrained World Checkpoint

The best-performing world checkpoint has been released here, along with details for reconstructing the world object from this checkpoint.

Reported metrics:

Overall accuracy [train]: 96.47 (first: 97.76, second: 98.72, atleast_one: 100.00)
Overall accuracy [val]: 98.61 (first: 98.61, second: 100.00, atleast_one: 100.00)

TODO: Visualize an evolution chart showing the emergence of grounded language.

References

  1. Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra. Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog. EMNLP 2017. [arxiv]
  2. Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, Jason Weston. ParlAI: A Dialog Research Software Platform. 2017. [arxiv]
  3. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M.F. Moura, Devi Parikh and Dhruv Batra. Visual Dialog. CVPR 2017. [arxiv]
  4. Abhishek Das, Satwik Kottur, José M.F. Moura, Stefan Lee, and Dhruv Batra. Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning. ICCV 2017. [arxiv]
  5. ParlAI Docs. [http://parl.ai/static/docs/index.html]
  6. PyTorch Docs. [http://pytorch.org/docs/master]

Standing on the Shoulders of Giants

The ease of implementing this paper using the ParlAI framework is heavily credited to the original source code released by the authors of this paper. [batra-mlp-lab/lang-emerge]

License

BSD


Releases(v1.0)
  • v1.0(Nov 10, 2017)

    The attached checkpoint was the best-performing one when the following script was executed at this commit:

    python3 train.py --use-gpu --memoryless-abot --num-epochs 99999
    

    Evaluation of the checkpoint:

    python3 evaluate.py --load-path world_best.pth 
    

    Reported metrics:

    Overall accuracy [train]: 96.47 (first: 97.76, second: 98.72, atleast_one: 100.00)
    Overall accuracy [val]: 98.61 (first: 98.61, second: 100.00, atleast_one: 100.00)
    

    Minimal snippet to reconstruct the world using this checkpoint:

    import torch

    from bots import Questioner, Answerer
    from world import QAWorld

    # Load the saved world dict; it bundles the options ('opt') and the
    # state dicts of both bots ('qbot', 'abot').
    world_dict = torch.load('path/to/checkpoint.pth')

    # Rebuild both bots with the same options they were trained with.
    questioner = Questioner(world_dict['opt'])
    answerer = Answerer(world_dict['opt'])
    if world_dict['opt'].get('use_gpu'):
        questioner, answerer = questioner.cuda(), answerer.cuda()

    # Restore the trained parameters and place both bots in a world.
    questioner.load_state_dict(world_dict['qbot'])
    answerer.load_state_dict(world_dict['abot'])
    world = QAWorld(world_dict['opt'], questioner, answerer)
    
    Attached asset: world_best.pth (679.17 KB)