Implemented shortest-path word segmentation, maximum-probability word segmentation, HMM-based part-of-speech tagging, and BiLSTM+CRF-based named entity recognition.

Overview

Getting Started

Enter the root directory (the ner folder or the seg_tag folder) and run:

pip install -r requirements.txt

Wait for the environment setup to finish.

The program entry point is main.py; run:

python main.py

The seg_tag folder outputs, in a single run:

  1. Maximum-probability segmentation results with P, R, F
  2. Maximum-probability segmentation (additive smoothing) results with P, R, F
  3. Maximum-probability segmentation (Jelinek-Mercer interpolation smoothing) results with P, R, F
  4. Shortest-path segmentation results with P, R, F
  5. POS tagging results with P, R, F under both scoring schemes
  6. Running time of each algorithm

The ner folder outputs:

  1. The count and the P, R, F of each label
  2. P, R, F on the test set
  3. The confusion matrix
  4. Running time of the algorithm

Word Segmentation and POS Tagging

File Structure

D:.
│  clean.ipynb      # dataset preprocessing
│  dag.py           # DAG construction
│  dictionary.py    # dictionary construction
│  main.py          # program entry point
│  mpseg.py         # maximum-probability segmentation module
│  pos.py           # POS tagging module
│  spseg.py         # shortest-path segmentation module
│  requirements.txt
│  trie.py          # trie
│  score.py         # scoring functions
│
├─data              # dataset
│      sequences.txt
│      wordpieces.txt
│
└─__pycache__

Each module has passed unit and integration tests.

Code comments follow the Google style.

Dictionary Construction

class Trie is defined as the dictionary data structure; the terminal node of each word stores the word's frequency and its POS tags.

Using a trie keeps the space overhead to a minimum, since words sharing a prefix share nodes.

class Dictionary wraps the dictionary and collects word frequencies, POS tags, the transition matrix, the emission matrix, and so on.
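
A minimal sketch of the trie idea, with illustrative class and attribute names (the actual implementation lives in trie.py and dictionary.py and may differ):

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # char -> TrieNode
        self.is_word = False  # True if a dictionary word ends at this node
        self.freq = 0         # word frequency, stored at the terminal node
        self.pos = {}         # POS tag -> count, stored at the terminal node


class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word, pos_tag):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True
        node.freq += 1
        node.pos[pos_tag] = node.pos.get(pos_tag, 0) + 1

    def lookup(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return None
        return node if node.is_word else None
```

Because words with a common prefix share nodes, the memory cost grows with the number of distinct prefixes rather than with the total number of characters in the dictionary.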

Dictionary-based Shortest-Path Segmentation

Given a sentence sentence[N], call the spcut method of class SPseg; the code executes, in order:

  1. Build a directed acyclic graph from the dictionary (using class DAG)
  2. Shortest-path DP (via the dp function)
  3. Backtrack to recover the shortest path
  4. Return the segmentation result

Shortest-path segmentation returns the segmentation with as few words as possible.
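
A minimal sketch of this fewest-words idea, assuming a plain set of dictionary words and illustrative names (the real code builds the graph through class DAG and runs the DP in spseg.py):

```python
def sp_cut(sentence, dictionary, max_word_len=8):
    """Fewest-words segmentation over a dictionary DAG (illustrative sketch)."""
    n = len(sentence)
    # dag[i] lists every j such that sentence[i:j] is a word (single chars always allowed)
    dag = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, min(n, i + max_word_len) + 1):
            if j == i + 1 or sentence[i:j] in dictionary:
                dag[i].append(j)

    # dp[i]: fewest words needed to cover sentence[i:]; nxt[i]: best next cut position
    INF = float("inf")
    dp = [INF] * (n + 1)
    dp[n] = 0
    nxt = [None] * n
    for i in range(n - 1, -1, -1):
        for j in dag[i]:
            if dp[j] + 1 < dp[i]:
                dp[i] = dp[j] + 1
                nxt[i] = j

    # backtrack from position 0 to recover the segmentation
    words, i = [], 0
    while i < n:
        words.append(sentence[i:nxt[i]])
        i = nxt[i]
    return words


# e.g. sp_cut("研究生命起源", {"研究", "研究生", "生命", "起源"})
# -> ['研究', '生命', '起源']  (3 words)
```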

Statistics-based Maximum-Probability Segmentation

Given a sentence sentence[N], call the mpcut method of class MPseg; the code executes, in order:

  1. Build a directed acyclic graph from the dictionary (using class DAG)
  2. Compute edge weights from the word frequencies collected in class Dictionary (each edge weight is the probability of the corresponding word)
  3. Shortest-path DP (via the dp function)
  4. Backtrack to recover the shortest path
  5. Return the segmentation result

The segmentation result y of maximum-probability segmentation satisfies
$$
y = \arg\max P(y|x) = \arg\max \frac{P(x|y)P(y)}{P(x)}
$$
where $P(x)$ and $P(x|y)$ are constants, so:
$$
\begin{aligned}
y &= \arg\max P(y|x) \\
  &= \arg\max P(y) \\
  &= \arg\max \prod_{i=1}^{n} P(w_i) \\
  &= \arg\max \log\prod_{i=1}^{n} P(w_i) \\
  &= \arg\min \Big(-\sum_{i=1}^{n} \log P(w_i)\Big)
\end{aligned}
$$
Maximizing the probability is therefore equivalent to finding the shortest path on the DAG with edge weights $-\log P(w_i)$.
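
A minimal sketch of the corresponding DP with $-\log P$ edge weights, assuming an unsmoothed unigram frequency table (word_freq and total are illustrative names; the real module also applies the smoothing described below):

```python
import math


def mp_cut(sentence, word_freq, total, max_word_len=8):
    """Maximum-probability segmentation as a -log P shortest path (sketch)."""
    n = len(sentence)
    INF = float("inf")
    dp = [INF] * (n + 1)   # dp[i]: minimal cost to segment sentence[i:]
    dp[n] = 0.0
    nxt = [None] * n

    def cost(word):
        # unsmoothed P(w) = c(w) / total; unseen words get infinite cost here
        c = word_freq.get(word, 0)
        return -math.log(c / total) if c > 0 else INF

    for i in range(n - 1, -1, -1):
        for j in range(i + 1, min(n, i + max_word_len) + 1):
            w = cost(sentence[i:j])
            if w + dp[j] < dp[i]:
                dp[i] = w + dp[j]
                nxt[i] = j

    # backtrack to recover the highest-probability segmentation
    words, i = [], 0
    while i < n and nxt[i] is not None:
        words.append(sentence[i:nxt[i]])
        i = nxt[i]
    return words
```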

Data Smoothing

To account for unseen events, events with zero observed frequency should also be assigned some probability.

The code provides two smoothing methods:

  1. Additive smoothing (add-$\delta$ smoothing)
  2. Jelinek-Mercer interpolation (JM interpolation)

Additive smoothing: $$ P(w_i) = \frac{\delta + c(w_i)}{\delta|V| + \sum_j c(w_j)} $$ The code uses $\delta = 1$.

Jelinek-Mercer interpolation: $$ P(w_i) = \lambda P_{ML}(w_i) + (1-\lambda)P_{unif} $$ The idea is to interpolate the n-gram probability between the n-gram model and the (n-1)-gram model.

The code takes the zero-order model to be the uniform distribution $P_{unif} = \frac{1}{|V|}$ and uses a default value of $\lambda = 0.9$.
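
Both estimators take only a few lines; the function and parameter names below are illustrative, not necessarily those used in the code:

```python
def add_delta_prob(word, word_freq, total, vocab_size, delta=1.0):
    """Additive (add-delta) smoothing; the code uses delta = 1."""
    return (delta + word_freq.get(word, 0)) / (delta * vocab_size + total)


def jelinek_mercer_prob(word, word_freq, total, vocab_size, lam=0.9):
    """JM interpolation between the ML unigram estimate and a uniform model."""
    p_ml = word_freq.get(word, 0) / total
    p_unif = 1.0 / vocab_size
    return lam * p_ml + (1 - lam) * p_unif
```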

HMM-based POS Tagging

The HMM is a probabilistic graphical model: the emission matrix and transition matrix are estimated from the training data, and the hidden states (the POS sequence) are inferred for a given observation sequence (the segmentation result).

Given a segmentation result, call the tagging method of class WordTagging; the code executes, in order:

  1. Compute the emission and transition probabilities from the counts
  2. Viterbi decoding, to find the hidden-state sequence with maximum probability
  3. Backtrack to obtain the hidden-state sequence

The POS sequence obtained by Viterbi decoding of the HMM satisfies:
$$
\begin{aligned}
y &= \arg\max P(y|x) \\
  &= \arg\max \frac{P(y)P(x|y)}{P(x)} \\
  &= \arg\max P(y)P(x|y) \\
  &= \arg\max \pi[t_1]\,b_1[w_1]\prod_{i=1}^{n-1} a[t_i][t_{i+1}]\,b_{i+1}[w_{i+1}] \\
  &= \arg\max \log\Big(\pi[t_1]\,b_1[w_1]\prod_{i=1}^{n-1} a[t_i][t_{i+1}]\,b_{i+1}[w_{i+1}]\Big) \\
  &= \arg\min -\Big(\log\pi[t_1] + \log b_1[w_1] + \sum_{i=1}^{n-1}\big(\log a[t_i][t_{i+1}] + \log b_{i+1}[w_{i+1}]\big)\Big)
\end{aligned}
$$
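
A minimal Viterbi sketch in negative log-space, matching the derivation above (the array names pi, A, B are illustrative; the actual tagger lives in pos.py):

```python
import numpy as np


def viterbi(obs, pi, A, B):
    """Viterbi decoding in negative log-space (illustrative sketch).

    obs: list of observation indices (words)
    pi : (T,)   initial tag probabilities
    A  : (T, T) transition probabilities a[t_i][t_{i+1}]
    B  : (T, V) emission probabilities b[t][w]
    """
    T, n = len(pi), len(obs)
    cost = np.full((n, T), np.inf)   # minimal -log prob ending in tag t at step i
    back = np.zeros((n, T), dtype=int)

    # zero probabilities simply become infinite cost
    cost[0] = -np.log(pi) - np.log(B[:, obs[0]])
    for i in range(1, n):
        for t in range(T):
            cand = cost[i - 1] - np.log(A[:, t]) - np.log(B[t, obs[i]])
            back[i, t] = int(np.argmin(cand))
            cost[i, t] = cand[back[i, t]]

    # backtrack the best tag sequence
    tags = [int(np.argmin(cost[-1]))]
    for i in range(n - 1, 0, -1):
        tags.append(int(back[i, tags[-1]]))
    return tags[::-1]
```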

Precision, Recall, F1 Score, and Performance

From the formulas:
$$
P = \frac{\text{correct results in the system output}}{\text{total results in the system output}}, \quad
R = \frac{\text{correct results in the system output}}{\text{results in the test set}}, \quad
F = \frac{2\times P \times R}{P+R}
$$
Running python main.py performs inference on the test data and produces all of the segmentation and POS-tagging results above, together with precision, recall, F1 score, and timing.
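
As a small illustration, assuming each result is a hashable item (e.g. a word span with its position, so that position matters), P, R, and F can be computed as:

```python
def prf(system_output, gold):
    """Precision / recall / F1 between two collections of results (sketch)."""
    correct = len(set(system_output) & set(gold))
    p = correct / len(system_output) if system_output else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```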

Segmentation accuracy: MP (with JM smoothing) = MP (with add-1 smoothing) > MP (no smoothing) = SP

Smoothing yields better segmentation results, and the statistical method (MP) segments better than the dictionary-based method (SP).

For HMM POS tagging, the input is first segmented with MP (with JM smoothing), and the segmentation result is then tagged. Both a rough metric (ignoring order) and a strict metric (taking order into account) are reported.

For a given sequence of length N:

| Method | Inference time complexity |
| --- | --- |
| MP segmentation | $O(N+M)$ |
| SP segmentation | $O(N+M)$ |
| HMM POS tagging | $O(T^2N)$ |

where $M$ is the number of edges in the DAG and $T$ is the number of POS tags. The inference complexity of all three algorithms is therefore linear in the sequence length.

Named Entity Recognition

A BiLSTM+CRF model is used.


The BiLSTM takes the given sentence (as an embedding sequence) as input and outputs a named-entity label for each token. Through its bidirectional structure it learns dependencies within the observation sequence (the input characters), and during training the LSTM automatically extracts features of the observations relevant to the objective (e.g., recognizing entities). However, the BiLSTM cannot learn the dependencies and constraints among the output labels.

The CRF is equivalent to adding a layer of constraints on top of the BiLSTM outputs, so the model also learns the dependencies within the output sequence. A traditional CRF requires hand-crafted feature templates, whereas in this model the feature functions are learned automatically.
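
A minimal PyTorch skeleton of this architecture, assuming the pytorch-crf package provides the CRF layer (class and method names here are illustrative; the project's actual model is defined in model.py):

```python
import torch.nn as nn
from torchcrf import CRF   # assumption: the pytorch-crf package is available


class BiLSTMCRF(nn.Module):
    """Minimal BiLSTM+CRF skeleton (illustrative sketch)."""

    def __init__(self, vocab_size, num_tags, embedding_size=128, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embedding_size)
        self.lstm = nn.LSTM(embedding_size, hidden_size // 2,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden_size, num_tags)   # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)   # learned tag-transition scores

    def _emissions(self, x):
        out, _ = self.lstm(self.embed(x))             # (batch, seq_len, hidden)
        return self.fc(out)                           # (batch, seq_len, num_tags)

    def loss(self, x, tags, mask):
        # negative log-likelihood of the gold tag sequence under the CRF
        return -self.crf(self._emissions(x), tags, mask=mask, reduction="mean")

    def decode(self, x, mask):
        # Viterbi decoding over emission + transition scores
        return self.crf.decode(self._emissions(x), mask=mask)
```

The BiLSTM produces per-token emission scores, while the CRF layer holds a learned tag-transition matrix, so decoding maximizes the sum of emission and transition scores over the whole label sequence.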

File Structure

D:.
│  dataloader.py    # dataset loading
│  evaluation.py    # model evaluation
│  main.py          # program entry point
│  model.py         # BiLSTM and BiLSTM+CRF models
│  utils.py         # helper functions
│  requirements.txt
│
├─data_ner          # dataset
│      dev.char.bmes
│      test.char.bmes
│      train.char.bmes
│
├─results           # trained model
│      BiLSTM+CRF.pkl
│
└─__pycache__

Parameter Settings

| Total epochs | Batch size | Learning rate | Hidden size | Embedding size |
| --- | --- | --- | --- | --- |
| 30 | 64 | 0.001 | 128 | 128 |

At the end of every epoch the model is evaluated on the validation set, and the model that performs best on the validation set is kept as the final (optimal) model.
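
A sketch of that selection loop, assuming hypothetical helpers model, train_loader, dev_loader, and evaluate (the real training code is in main.py):

```python
import copy
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
best_f1, best_state = 0.0, None

for epoch in range(30):
    model.train()
    for x, tags, mask in train_loader:
        optimizer.zero_grad()
        loss = model.loss(x, tags, mask)
        loss.backward()
        optimizer.step()

    dev_f1 = evaluate(model, dev_loader)       # P/R/F on the validation set
    if dev_f1 > best_f1:                       # keep the best model seen so far
        best_f1 = dev_f1
        best_state = copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)              # final (optimal) model
```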

The model achieves over 95% accuracy on the test set.

Reference

[1] Zong Chengqing. Statistical Natural Language Processing (《统计自然语言处理》).

[2] Lample G, Ballesteros M, Subramanian S, et al. Neural architectures for named entity recognition[J]. arXiv preprint arXiv:1603.01360, 2016.

[3] blog: 1. Understanding LSTM Networks -- colah's blog, 2. CRF Layer on the Top of BiLSTM - 1 | CreateMoMo

[4] code: 1. hiyoung123/ChineseSegmentation: 中文分词 (github.com) ,2. luopeixiang/named_entity_recognition: 中文命名实体识别(github.com), 3. Advanced: Making Dynamic Decisions and the Bi-LSTM CRF — PyTorch Tutorials 1.9.1+cu102 documentation

[5] dataset: 1. jiesutd/LatticeLSTM: Chinese NER using Lattice LSTM. Code for ACL 2018 paper. (github.com), 2. People's Daily 1998 corpus (人民日报1998)
