PFN (Partition Filter Network)

This repository contains the code of the official implementation for the paper A Partition Filter Network for Joint Entity and Relation Extraction (EMNLP 2021) [PDF] [PPT]


Model Overview

In this work, we present a new framework equipped with a novel recurrent encoder named partition filter encoder designed for multi-task learning. The encoder enforces bilateral interaction between NER and RE in two ways:

  1. The shared partition represents inter-task information and is equally accessible to both tasks, allowing for balanced interaction between NER and RE.
  2. The task partitions represent intra-task information and are formed through the concerted efforts of entity and relation gates, ensuring that the encoding processes of entity and relation features depend on each other.
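
The following PyTorch snippet is a deliberately simplified sketch of this partition idea, not the actual encoder in this repository: all names are illustrative, and the real partition filter encoder computes its partitions recurrently at every time step (see the paper for the exact gating equations).

```python
import torch
import torch.nn as nn

class PartitionSketch(nn.Module):
    """Toy illustration of partition filtering (not the paper's equations):
    two soft gates split a candidate feature vector into entity-specific,
    relation-specific and shared partitions; each task then reads its own
    partition plus the shared one."""
    def __init__(self, hidden_size):
        super().__init__()
        self.entity_gate = nn.Linear(hidden_size, hidden_size)
        self.relation_gate = nn.Linear(hidden_size, hidden_size)

    def forward(self, candidate):
        e = torch.sigmoid(self.entity_gate(candidate))     # entity gate
        r = torch.sigmoid(self.relation_gate(candidate))   # relation gate
        shared = torch.min(e, r) * candidate                # neurons useful to both tasks
        entity_only = (e - torch.min(e, r)) * candidate     # NER-specific partition
        relation_only = (r - torch.min(e, r)) * candidate   # RE-specific partition
        ner_feature = entity_only + shared                  # NER sees its partition + shared
        re_feature = relation_only + shared                 # RE sees its partition + shared
        return ner_feature, re_feature
```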

Preparation

Environment Setup

The experiments were performed on a single NVIDIA RTX 3090 GPU. The dependency packages can be installed with the following command:

pip install -r requirements.txt

Also, make sure that the Python version is 3.7.10.
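
For example, one possible way to set this up with conda (the environment name is arbitrary):

```
conda create -n pfn python=3.7.10
conda activate pfn
pip install -r requirements.txt
```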

Data Acquisition and Preprocessing

This is the first work that covers all the mainstream English datasets for evaluation, including NYT, WebNLG, ADE, ACE2005, ACE2004, SciERC, and CoNLL04. Please follow the instructions in the README.md of each dataset folder in ./data/ for data acquisition and preprocessing.

Custom Dataset

If your custom dataset contains a large number of triples with head-overlap entities (common in Chinese datasets), the accuracy of the original PFN will not be good.

The original model cannot decode triples with head-overlap entities. For example, if New York and New York City are both entities and there is an RE prediction such as (New, cityof, USA), we cannot tell whether New refers to New York or New York City.

Luckily, the impact on the evaluation of the English datasets is limited, since such triples are either filtered out (in ADE) or rare (one in the test set of SciERC, one in ACE04, none in the other datasets).

You can use our updated PFN-nested to handle this issue. PFN-nested is an enhanced version of PFN that is better at leveraging entity tail information and capable of handling nested triple prediction. To use it, replace the files in the root directory with the files in the PFN-nested folder, then follow the directions in Quick Start.
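
For example, assuming the files in PFN-nested mirror those in the root directory, the replacement can be done from the repository root with:

```
cp -r PFN-nested/* ./
```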

Performance comparison on SciERC

| Model | NER | RE |
|-------|-----|-----|
| PFN | 66.8 | 38.4 |
| PFN-nested | 67.9 | 38.7 |

Quick Start

Model Training

The training command is listed below (the command for CoNLL04 is given in Evaluation on CoNLL04):

python main.py \
--data ${NYT/WEBNLG/ADE/ACE2005/ACE2004/SCIERC} \
--do_train \
--do_eval \
--embed_mode ${bert_cased/albert/scibert} \
--batch_size ${20 (for most datasets) /4 (for SCIERC)} \
--lr ${0.00002 (for most datasets) /0.00001 (for SCIERC)} \
--output_file ${the name of your output files, e.g. ace_test} \
--eval_metric ${micro/macro} 
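
For instance, a concrete SciERC run with the values suggested above could look like this (the output file name is arbitrary, and micro evaluation is assumed):

```
python main.py \
--data SCIERC \
--do_train \
--do_eval \
--embed_mode scibert \
--batch_size 4 \
--lr 0.00001 \
--output_file sci_test_scibert \
--eval_metric micro
```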

After training, you will obtain three files in the ./save/${output_file}/ directory:

  • ${output_file}.log records the logging information.
  • ${output_file}.txt records the loss and the NER/RE results on the dev and test sets for each epoch.
  • ${output_file}.pt is the saved model with the best average F1 of NER and RE on the dev set.

Evaluation on Pre-trained Model

The evaluation command is as follows:

python eval.py \
--data ${NYT/WEBNLG/ADE/ACE2005/ACE2004/SCIERC} \
--eval_metric ${micro/macro} \
--model_file ${the path of the saved model you want to evaluate, e.g. save/ace_test.pt} \
--embed_mode ${bert_cased/albert/scibert}
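
For example, to evaluate the ACE2005 model saved as save/ace_test.pt (micro evaluation assumed):

```
python eval.py \
--data ACE2005 \
--eval_metric micro \
--model_file save/ace_test.pt \
--embed_mode albert
```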

Inference on Customized Input

If you want to evaluate the model with customized input, please run the following code:

python inference.py \
--model_file ${the path of your saved model} \
--sent ${the sentence you want to evaluate (a single string)}

${model_file} must contain information about the dataset the model was trained on (web/nyt/ade/ace/sci) and the type of pretrained embedding the model uses (albert/bert/scibert). For example, model_file could be set to "web_bert.pt".

Example

input:
python inference.py \
--model_file save/sci_test_scibert.pt \
--sent "In this work , we present a new framework equipped with a novel recurrent encoder   
        named partition filter encoder designed for multi-task learning ."

result:
entity_name: framework, entity type: Generic
entity_name: recurrent encoder, entity type: Method
entity_name: partition filter encoder, entity type: Method
entity_name: multi-task learning, entity type: Task
triple: recurrent encoder, Used-for, framework
triple: recurrent encoder, Part-of, framework
triple: recurrent encoder, Used-for, multi-task learning
triple: partition filter encoder, Hyponym-of, recurrent encoder
triple: partition filter encoder, Used-for, multi-task learning



input:  
python inference.py \
--model_file save/ace_test_albert.pt \
--sent "As Williams was struggling to gain production and an audience for his work in the late 1930s ,  
        he worked at a string of menial jobs that included a stint as caretaker on a chicken ranch in   
        Laguna Beach , California . In 1939 , with the help of his agent Audrey Wood , Williams was 
        awarded a $1,000 grant from the Rockefeller Foundation in recognition of his play Battle of 
        Angels . It was produced in Boston in 1940 and was poorly received ."

result:
entity_name: Williams, entity type: PER
entity_name: audience, entity type: PER
entity_name: his, entity type: PER
entity_name: he, entity type: PER
entity_name: caretaker, entity type: PER
entity_name: ranch, entity type: FAC
entity_name: Laguna Beach, entity type: GPE
entity_name: California, entity type: GPE
entity_name: his, entity type: PER
entity_name: agent, entity type: PER
entity_name: Audrey Wood, entity type: PER
entity_name: Williams, entity type: PER
entity_name: Rockefeller Foundation, entity type: ORG
entity_name: his, entity type: PER
entity_name: Boston, entity type: GPE
triple: caretaker, PHYS, ranch
triple: ranch, PART-WHOLE, Laguna Beach
triple: Laguna Beach, PART-WHOLE, California

Evaluation on CoNLL04

We also ran the test on the CoNLL04 dataset, although for several reasons we did not report these results in our paper.

The command for running CoNLL04 is listed below:

python main.py \
--data CONLL04 \
--do_train \
--do_eval \
--embed_mode albert \
--batch_size 10 \
--lr 0.00002 \
--output_file ${the name of your output files} \
--eval_metric micro \
--clip 1.0 \
--epoch 200

Pre-trained Models and Training Logs

We provide pre-trained models for NYT/WEBNLG/ACE2005/ACE2004/SCIERC/CONLL04, along with the recorded results of each epoch, which are identical to the training results obtained under the configurations specified above.

Download Links

Due to limited space in Google Drive, the 10-fold model files for ADE are not available (the training records are still available).

After downloading the linked files below, unzip them and put ${data}_test.pt in the ./save/ directory before running eval.py. ${data}_test.txt and ${data}_test.log record the results of each epoch; you may want to check those as well.

| Dataset | File Size | Embedding | Download |
|---------|-----------|-----------|----------|
| NYT | 393MB | Bert-base-cased | Link |
| WebNLG | 393MB | Bert-base-cased | Link |
| ACE05 | 815MB | Albert-xxlarge-v1 | Link |
| ACE04 | 3.98GB | Albert-xxlarge-v1 | Link |
| SciERC | 399MB | Scibert-uncased | Link |
| ADE | 214KB | Bert + Albert | Link |
| CoNLL04 | 815MB | Albert-xxlarge-v1 | Link |
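
For example, after downloading the NYT checkpoint from the table above (assuming the unzipped file is named nyt_test.pt), the evaluation can be run as follows (micro evaluation assumed):

```
mv nyt_test.pt save/
python eval.py \
--data NYT \
--eval_metric micro \
--model_file save/nyt_test.pt \
--embed_mode bert_cased
```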

Result Display

F1 results on NYT/WebNLG/ACE05/SciERC:

| Dataset | Embedding | NER | RE |
|---------|-----------|-----|-----|
| NYT | Bert-base-cased | 95.8 | 92.4 |
| WebNLG | Bert-base-cased | 98.0 | 93.6 |
| ACE05 | Albert-xxlarge-v1 | 89.0 | 66.8 |
| SciERC | Scibert-uncased | 66.8 | 38.4 |

F1 results on ACE04:

| 5-fold | 0 | 1 | 2 | 3 | 4 | Average |
|--------|---|---|---|---|---|---------|
| Albert-NER | 89.7 | 89.9 | 89.5 | 89.7 | 87.6 | 89.3 |
| Albert-RE | 65.5 | 61.4 | 63.4 | 61.5 | 60.7 | 62.5 |

F1 results on CoNLL04:

| Model | Embedding | Micro-NER | Micro-RE |
|-------|-----------|-----------|----------|
| Table-sequence | Albert-xxlarge-v1 | 90.1 | 73.6 |
| PFN | Albert-xxlarge-v1 | 89.6 | 75.0 |

F1 results on ADE:

| 10-fold | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Average |
|---------|---|---|---|---|---|---|---|---|---|---|---------|
| Bert-NER | 89.6 | 92.3 | 90.3 | 88.9 | 88.8 | 90.2 | 90.1 | 88.5 | 88.0 | 88.9 | 89.6 |
| Bert-RE | 80.5 | 85.8 | 79.9 | 79.4 | 79.3 | 80.5 | 80.0 | 78.1 | 76.2 | 79.8 | 80.0 |
| Albert-NER | 91.4 | 92.9 | 91.9 | 91.5 | 90.7 | 91.6 | 91.9 | 89.9 | 90.6 | 90.7 | 91.3 |
| Albert-RE | 83.9 | 86.8 | 82.8 | 83.2 | 82.2 | 82.4 | 84.5 | 82.3 | 81.9 | 82.2 | 83.2 |

Robustness Against Input Perturbation

We use robustness tests to evaluate our model under adverse circumstances. Specifically, we use the domain-transformation methods for NER from TextFlint.

The test files can be found in the ./robustness_data/ folder. Our reported results are evaluated with the ACE2005-albert model linked above. For each test file, move it to ./data/ACE2005/, rename it to test_triples.json, and then run eval.py following the instructions above.
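
For example, for one of the TextFlint test files (the file name below is a placeholder), with the ACE2005-albert checkpoint saved as save/ace_test.pt:

```
cp robustness_data/<transformed_test_file>.json data/ACE2005/test_triples.json
python eval.py \
--data ACE2005 \
--eval_metric micro \
--model_file save/ace_test.pt \
--embed_mode albert
```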

Citation

Please cite our paper if it's helpful to you in your research.

@misc{yan2021partition,
      title={A Partition Filter Network for Joint Entity and Relation Extraction}, 
      author={Zhiheng Yan and Chong Zhang and Jinlan Fu and Qi Zhang and Zhongyu Wei},
      year={2021},
      eprint={2108.12202},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}