Sequence labeler

This is a neural network sequence labeling system. Given a sequence of tokens, it learns to assign a label to each token. It can be used for named entity recognition, POS-tagging, error detection, chunking, CCG supertagging, etc.

The main model implements a bidirectional LSTM for sequence tagging. In addition, you can incorporate character-level information, either by concatenating a character-based representation with the word embedding or by combining the two with an attention/gating mechanism.
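
As a rough illustration only (not the actual code in this repository), the word-level core of such a model can be sketched in TensorFlow 1.x as follows; the vocabulary size, embedding size, LSTM size and number of labels are placeholder values:

import tensorflow as tf

# Placeholder inputs: word ids padded to a common length, plus true sentence lengths.
word_ids = tf.placeholder(tf.int32, [None, None], name="word_ids")            # [batch, time]
sentence_lengths = tf.placeholder(tf.int32, [None], name="sentence_lengths")

# Word embeddings (sizes here are illustrative; see word_embedding_size in the config).
embedding_matrix = tf.get_variable("word_embeddings", [10000, 300])
word_vectors = tf.nn.embedding_lookup(embedding_matrix, word_ids)
# A character-based representation, if used, would be concatenated or gated in here.

# Bidirectional LSTM over the sentence.
cell_fw = tf.contrib.rnn.LSTMCell(200)
cell_bw = tf.contrib.rnn.LSTMCell(200)
(out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(
    cell_fw, cell_bw, word_vectors,
    sequence_length=sentence_lengths, dtype=tf.float32)
lstm_outputs = tf.concat([out_fw, out_bw], axis=-1)                           # [batch, time, 400]

# Per-token label scores; a softmax or CRF output layer is placed on top of these.
label_scores = tf.layers.dense(lstm_outputs, 17)                              # 17 labels, illustrative
predicted_labels = tf.argmax(label_scores, axis=-1)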

Run with:

python experiment.py config.conf

Preferably with TensorFlow set up to use CUDA, so the process can run on a GPU. The script will train the model on the training data, test it on the test data, and print various evaluation metrics.

Note: The original sequence labeler was implemented in Theano, but since Theano support is ending, I have reimplemented it in TensorFlow. I also took the chance to refactor the code a bit, and it should be better in every way. However, if you need the specific code used in previously published papers, you'll need to refer to older commits.

Requirements

  • python (tested with 2.7.12 and 3.5.2)
  • numpy (tested with 1.13.3 and 1.14.0)
  • tensorflow (tested with 1.3.0 and 1.4.1)

Data format

The training and test data are expected in standard CoNLL-style tab-separated format: one word per line, with separate columns for the token and its label, and an empty line between sentences.

For error detection, this would be something like:

I       c
saws    i
the     c
show    c

The first column is assumed to be the token and the last column is the label. There can be other columns in the middle, which are currently not used. For example:

EU      NNP     I-NP    S-ORG
rejects VBZ     I-VP    O
German  JJ      I-NP    S-MISC
call    NN      I-NP    O
to      TO      I-VP    O
boycott VB      I-VP    O
British JJ      I-NP    S-MISC
lamb    NN      I-NP    O
.       .       O       O
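
A minimal sketch of how such a file could be read in Python is shown below; the actual data loading code in the repository may differ, and read_conll is just an illustrative helper name:

import sys

def read_conll(path):
    # Read a CoNLL-style tab-separated file: first column is the token,
    # last column is the label, empty lines separate sentences.
    sentences, current = [], []
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.strip():
                if current:
                    sentences.append(current)
                    current = []
                continue
            columns = line.split("\t")
            current.append((columns[0], columns[-1]))
    if current:
        sentences.append(current)
    return sentences

if __name__ == "__main__":
    for sentence in read_conll(sys.argv[1]):
        print(" ".join(token + "/" + label for token, label in sentence))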

Configuration

Edit the values in config.conf as needed (an illustrative snippet is shown after the list of options):

  • path_train - Path to the training data, in CoNLL tab-separated format. One word per line, first column is the word, last column is the label. Empty lines between sentences.
  • path_dev - Path to the development data, used for choosing the best epoch.
  • path_test - Path to the test file. Can contain multiple files, colon separated.
  • conll_eval - Whether the standard CoNLL NER evaluation should be run.
  • main_label - The output label for which precision/recall/F-measure are calculated. Does not affect accuracy or measures from the CoNLL eval.
  • model_selector - What is measured on the dev set for model selection: "dev_conll_f:high" for NER and chunking, "dev_acc:high" for POS-tagging, "dev_f05:high" for error detection.
  • preload_vectors - Path to the pretrained word embeddings, in word2vec plain text format. If your embeddings are in binary, you can use convertvec to convert them to plain text.
  • word_embedding_size - Size of the word embeddings used in the model.
  • crf_on_top - If True, use a CRF as the output layer. If False, use softmax instead.
  • emb_initial_zero - Whether word embeddings should have zero initialisation by default.
  • train_embeddings - Whether word embeddings should be updated during training.
  • char_embedding_size - Size of the character embeddings.
  • word_recurrent_size - Size of the word-level LSTM hidden layers.
  • char_recurrent_size - Size of the char-level LSTM hidden layers.
  • hidden_layer_size - Size of the extra hidden layer on top of the bi-LSTM.
  • char_hidden_layer_size - Size of the extra hidden layer on top of the character-based component.
  • lowercase - Whether words should be lowercased when mapping to word embeddings.
  • replace_digits - Whether all digits should be replaced by 0.
  • min_word_freq - Minimal frequency of words to be included in the vocabulary. Others will be considered OOV.
  • singletons_prob - The probability of mapping words that appear only once in the training data to OOV during training.
  • allowed_word_length - Maximum allowed word length; longer words are clipped to this length. Can be necessary if the text contains unreasonably long tokens, e.g. URLs.
  • max_train_sent_length - Discard sentences longer than this limit when training.
  • vocab_include_devtest - Load words from dev and test sets also into the vocabulary. If they don't appear in the training set, they will have the default representations from the preloaded embeddings.
  • vocab_only_embedded - Whether the vocabulary should contain only words in the pretrained embedding set.
  • initializer - The method used to initialize weight matrices in the network.
  • opt_strategy - The method used for weight updates.
  • learningrate - Learning rate.
  • clip - Clip the gradient to a range.
  • batch_equal_size - Create batches of sentences with equal length.
  • epochs - Maximum number of epochs to run.
  • stop_if_no_improvement_for_epochs - Training will be stopped if there has been no improvement for n epochs.
  • learningrate_decay - If performance hasn't improved for 3 epochs, the learning rate is multiplied by this value.
  • dropout_input - The probability for applying dropout to the word representations. 0.0 means no dropout.
  • dropout_word_lstm - The probability for applying dropout to the LSTM outputs.
  • tf_per_process_gpu_memory_fraction - The fraction of GPU memory that the process can use.
  • tf_allow_growth - Whether the GPU memory usage can grow dynamically.
  • main_cost - Control the weight of the main labeling objective.
  • lmcost_max_vocab_size - Maximum vocabulary size for the language modeling loss. The remaining words are mapped to a single entry.
  • lmcost_hidden_layer_size - Hidden layer size for the language modeling loss.
  • lmcost_gamma - Weight for the language modeling loss.
  • char_integration_method - How character information is integrated. Options are: "none" (not integrated), "concat" (concatenated), "attention" (the method proposed in Rei et al. (2016)).
  • save - Path to save the model.
  • load - Path to load the model.
  • garbage_collection - Whether garbage collection is explicitly called. Makes things slower but can operate with bigger models.
  • lstm_use_peepholes - Whether to use the LSTM implementation with peepholes.
  • random_seed - Random seed for initialisation and data shuffling. This can affect results, so for robust conclusions I recommend running multiple experiments with different seeds and averaging the metrics.
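
For reference, a config snippet using some of these options might look like the following; the option names come from the list above, but the paths and values are purely illustrative and should be adapted to your own data and setup:

# Illustrative config values only; adjust paths and sizes for your own setup.
path_train = data/train.tsv
path_dev = data/dev.tsv
path_test = data/test.tsv
conll_eval = False
main_label = i
model_selector = dev_f05:high
preload_vectors = embeddings/word2vec_300d.txt
word_embedding_size = 300
char_embedding_size = 100
word_recurrent_size = 200
char_recurrent_size = 200
crf_on_top = False
char_integration_method = attention
dropout_input = 0.5
learningrate = 1.0
epochs = 100
random_seed = 42
save = models/error_detection.model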

Printing output

There is now a separate script for loading a saved model and using it to print output for a given input file. Use the save option in the config file to save the model. The input file needs to be in the same format as the training data (one word per line, labels in a separate column). Labels are expected in the input file even when only printing output; if you don't know the correct labels, just put any valid label in that field.

To print the output, run:

python print_output.py labels model_file input_file

This will print the input file to standard output, with an extra column at the end that shows the prediction.
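
For the error detection example above, the output would look roughly like this, with the last column being the model's prediction (the actual predicted labels will of course depend on the trained model):

I       c       c
saws    i       i
the     c       c
show    c       c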

You can also use:

python print_output.py probs model_file input_file

This will print the individual probabilities for each of the possible labels. If the model uses a CRF, the probs option will output unnormalised state scores that do not take the transitions into account.

References

The main sequence labeling model is described here:

Compositional Sequence Labeling Models for Error Detection in Learner Writing
Marek Rei and Helen Yannakoudakis
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL-2016)

The character-level component is described here:

Attending to characters in neural sequence labeling models
Marek Rei, Gamal K.O. Crichton and Sampo Pyysalo
In Proceedings of the 26th International Conference on Computational Linguistics (COLING-2016)

The language modeling objective is described here:

Semi-supervised Multitask Learning for Sequence Labeling
Marek Rei
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL-2017)

The CRF implementation is based on:

Neural Architectures for Named Entity Recognition
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami and Chris Dyer
In Proceedings of NAACL-HLT 2016

The conlleval.py script is from: https://github.com/spyysalo/conlleval.py

License

The code is distributed under the Affero General Public License 3 (AGPL-3.0) by default. If you wish to use it under a different license, feel free to get in touch.

Copyright (c) 2018 Marek Rei

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
