D-REX: Dialogue Relation Extraction with Explanations

Overview

This repository contains the code to train and evaluate D-REX, a system that extracts relations and their explanations from dialogue.

How do I cite D-REX?

For now, please cite the arXiv paper:

@article{albalak2021drex,
      title={D-REX: Dialogue Relation Extraction with Explanations}, 
      author={Alon Albalak and Varun Embar and Yi-Lin Tuan and Lise Getoor and William Yang Wang},
      journal={arXiv preprint arXiv:2109.05126},
      year={2021},
}

To train the full system:

GPU=0
bash train_drex_system.sh $GPU

Notes:

  • The training script is set up for an NVIDIA Titan RTX (24 GB memory, mixed-precision training).
  • To train on a GPU with less memory, lower the GPU_BATCH_SIZE parameter in train_drex_system.sh to fit your memory limit (a sketch follows these notes).
  • Training the full system takes ~24 hours on a single NVIDIA Titan RTX.
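
For example, to train on a GPU with roughly half the memory, you might halve the per-GPU batch size before launching. This is a minimal sketch, assuming train_drex_system.sh defines GPU_BATCH_SIZE as a plain shell variable and compensates with gradient accumulation so the effective batch size stays the same; the value 15 is only an illustration.

GPU=0
# Hypothetical edit: halve the per-GPU batch size (e.g. 30 -> 15) in place.
sed -i 's/^GPU_BATCH_SIZE=.*/GPU_BATCH_SIZE=15/' train_drex_system.sh
bash train_drex_system.sh $GPU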

To test the trained system:

GPU=0
bash test_drex_system.sh $GPU

To train/test individual modules:

  • Relation Extraction Model -
    • Training:
      GPU=0
      MODEL_PATH=relation_extraction_model
      mkdir $MODEL_PATH
      CUDA_VISIBLE_DEVICES=$GPU python3 train_relation_extraction_model.py \
          --model_class=relation_extraction_roberta \
          --model_name_or_path=roberta-base \
          --base_model=roberta-base \
          --effective_batch_size=30 \
          --gpu_batch_size=30 \
          --fp16 \
          --output_dir=$MODEL_PATH \
          --relation_extraction_pretraining \
          > $MODEL_PATH/train_outputs.log
    • Testing (the threshold parsing used below is illustrated after this list):
      GPU=0
      MODEL_PATH=relation_extraction_model
      BEST_MODEL=$(ls $MODEL_PATH/F1* -d | sort -r | head -n 1)
      THRESHOLD1=$(echo $BEST_MODEL | grep -o "T1.....")
      THRESHOLD1=${THRESHOLD1: -2}
      THRESHOLD2=$(echo $BEST_MODEL | grep -o "T2.....")
      THRESHOLD2=${THRESHOLD2: -2}
      CUDA_VISIBLE_DEVICES=$GPU python3 test_relation_extraction_model.py \
          --model_class=relation_extraction_roberta \
          --model_name_or_path=$BEST_MODEL \
          --base_model=roberta-base \
          --relation_extraction_pretraining \
          --threshold1=$THRESHOLD1 \
          --threshold2=$THRESHOLD2 \
          --data_split=test
  • Explanation Extraction Model -
    • Training:
      GPU=0
      MODEL_PATH=explanation_extraction_model
      mkdir $MODEL_PATH
      CUDA_VISIBLE_DEVICES=$GPU python3 train_explanation_policy.py \
          --model_class=explanation_policy_roberta \
          --model_name_or_path=roberta-base \
          --base_model=roberta-base \
          --effective_batch_size=30 \
          --gpu_batch_size=30 \
          --fp16 \
          --output_dir=$MODEL_PATH \
          --explanation_policy_pretraining \
          > $MODEL_PATH/train_outputs.log    
    • Testing:
      GPU=0
      MODEL_PATH=explanation_extraction_model
      BEST_MODEL=$(ls $MODEL_PATH/F1* -d | sort -r | head -n 1)
      CUDA_VISIBLE_DEVICES=$GPU python3 test_explanation_policy.py \
          --model_class=explanation_policy_roberta \
          --model_name_or_path=$BEST_MODEL \
          --base_model=roberta-base \
          --explanation_policy_pretraining \
          --data_split=test
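
For reference, the relation extraction testing snippet above picks the best checkpoint by sorting directory names that begin with F1, then recovers the two decision thresholds from that name. The walk-through below uses a hypothetical directory name (the real naming scheme is produced by the training script and may differ); it only illustrates what the grep and substring expansion do.

# Hypothetical checkpoint directory name, for illustration only.
BEST_MODEL="relation_extraction_model/F1-60.12_T1-0.55_T2-0.40"
THRESHOLD1=$(echo $BEST_MODEL | grep -o "T1.....")   # matches "T1-0.55"
THRESHOLD1=${THRESHOLD1: -2}                         # last two characters -> "55"
THRESHOLD2=$(echo $BEST_MODEL | grep -o "T2.....")   # matches "T2-0.40"
THRESHOLD2=${THRESHOLD2: -2}                         # -> "40"
echo "threshold1=$THRESHOLD1 threshold2=$THRESHOLD2"

How test_relation_extraction_model.py interprets these values is defined by the script itself; the numbers above are illustrative only.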