A framework for cleaning Chinese dialog data

Overview

This project is a multi-process framework for cleaning dialog data from sources such as Zhihu, Weibo, and Tieba. It is still fairly rough, so bug reports and improvements are welcome, for example to the regex or suffix-based algorithm used by the in-sentence repeated-phrase reduction function. The code is still being polished; comments and citations for the origin of some functions are pending.
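As a rough illustration of the in-sentence repeated-phrase reduction mentioned above, a backreference regex can collapse immediately repeated phrases. This is a minimal sketch only, not the repository's actual implementation, and the function name dedup_repeated_phrases is made up:

```python
import re

def dedup_repeated_phrases(text: str, max_phrase_len: int = 10) -> str:
    # Collapse a phrase of 1..max_phrase_len characters that repeats
    # back-to-back into a single occurrence, e.g. "哈哈哈哈" -> "哈".
    pattern = re.compile(r"(.{1,%d}?)\1+" % max_phrase_len)
    return pattern.sub(r"\1", text)

print(dedup_repeated_phrases("你好你好你好,哈哈哈哈哈"))  # -> "你好,哈"
```

A suffix-based approach, as suggested in the overview, would find longer or non-adjacent repeats more reliably than this regex.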

Directory structure

--scripts: run scripts
  ---run.sh: runs run_dist.py with the rules I have selected
--src: main directory of the cleaning framework
  ---inputters: the dataloader and data I/O utility functions
  ---rules: rule functions at each level
  ---single_filter.py: per-process main program called by run_dist.py; loads and processes a single piece of data, then saves the filtered data as well as the dirty data
--tool_data: blacklist dictionaries, one word per line
--run_dist.py: main entry point; loads the dataloader, loads the blacklists, and builds the process pool (see the sketch after this list)
--utils: data statistics and result checking
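A minimal sketch of the dispatch pattern described for run_dist.py, assuming a filter_file(path, out_dir) entry point in single_filter.py; the actual function names and arguments in the repository may differ:

```python
import os
from multiprocessing import Pool

from single_filter import filter_file  # hypothetical per-file entry point

def main(raw_dir: str, out_dir: str, n_p: int = 8) -> None:
    # Collect the raw files and hand each one to a worker process.
    files = [os.path.join(raw_dir, name) for name in os.listdir(raw_dir)]
    with Pool(processes=n_p) as pool:
        pool.starmap(filter_file, [(path, out_dir) for path in files])

if __name__ == "__main__":
    main("./raw", "./out")
```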

Run and save the log

bash ./scripts/run.sh 2>&1 | tee -a cleaning.log

Rules

The rules include most of the cleaning rules used in current papers:

1. Blacklist filtering, covering special characters and profanity
2. Emoji removal
3. Privacy filtering for e-mail addresses, phone numbers, etc.; person names are replaced with NAME1, NAME2, ... (see the sketch after this list)
4. URL filtering
5. Unicode-related fixes
6. Deduplication: collapsing repeated words, filtering out sentences identical to their context, and dropping duplicate dialogs
7. Filtering out advertisements and generic responses as done for Meena and DialoGPT
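As a sketch of rule 3, privacy strings can be masked with regexes and person names replaced by numbered placeholders. The patterns and the replace_privacy name below are illustrative assumptions, not the repository's code:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\d{3}[- ]?\d{4}[- ]?\d{4}")  # rough mobile-number pattern

def replace_privacy(text: str, name_list: list) -> str:
    # Erase e-mail addresses and phone numbers.
    text = EMAIL_RE.sub("", text)
    text = PHONE_RE.sub("", text)
    # Replace each listed person name with NAME1, NAME2, ...
    for i, name in enumerate(name_list, start=1):
        text = text.replace(name, f"NAME{i}")
    return text

print(replace_privacy("张三的邮箱是 a@b.com", ["张三"]))  # -> "NAME1的邮箱是 "
```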

If noise identified by the rules above can be erased within the sentence, it is erased.
If it cannot be erased, the sentence is discarded: for a single-turn dialog the whole dialog is dropped; for a multi-turn dialog the dialog is split at that sentence.
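A sketch of the drop-or-split behaviour just described, where is_noisy stands for any rule that flags an utterance as un-erasable noise (the session format and the helper are assumptions):

```python
def split_session(utterances, is_noisy):
    # Split a multi-turn session at every un-erasable noisy utterance and
    # keep only fragments that still form a dialog (at least two turns).
    sessions, current = [], []
    for utt in utterances:
        if is_noisy(utt):
            if len(current) >= 2:
                sessions.append(current)
            current = []
        else:
            current.append(utt)
    if len(current) >= 2:
        sessions.append(current)
    return sessions

# The noisy turn splits the session; the leading one-turn fragment is dropped.
print(split_session(["你好", "买药加微信XXX", "好的", "再见"], lambda u: "微信" in u))
# -> [['好的', '再见']]
```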

NOTE THAT:
1. When modifying a rule, check whether it affects other rules; the rules must be applied in a specific order.
2. Blacklists such as person names or special topics can be placed under ./tool_data/ as needed; the file naming is configurable, see the dataloader in ./run_dist.py. Blacklists can be found on GitHub, e.g. https://github.com/fighting41love/funNLP (see the loading sketch below).
3. A test example is given above each function and the expected output below it.
4. The arguments currently set in run.sh are the ones I am using myself.
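A sketch of loading the one-word-per-line blacklists from ./tool_data/ as mentioned in note 2; the load_blacklists name and the exact logic of the dataloader in ./run_dist.py are assumptions:

```python
import os

def load_blacklists(tool_dir: str = "./tool_data") -> set:
    # Every file under tool_dir is a blacklist with one word per line.
    words = set()
    for fname in os.listdir(tool_dir):
        with open(os.path.join(tool_dir, fname), encoding="utf-8") as f:
            words.update(line.strip() for line in f if line.strip())
    return words

blacklist = load_blacklists()
print(f"loaded {len(blacklist)} blacklisted words")
```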

Arguments

| Argument | Description |
| :--------------- | :------------------- |
| n_p | number of worker processes |
| batch_size | maximum number of sessions handled by a single process |
| tool_dir | directory of tool data (e.g., blacklists) |
| out_dir | output directory for the cleaned files |
| raw_dir | directory containing the raw files to be processed |
| dirty_dir | where the filtered-out dirty data is stored; if empty, it is not saved |

| Argument | Description |
| :--------------- | :------------------- |
| split_multi_repost | split Weibo repost data of the form "//@aaa XXXX //@bbb XXX" into separate sentences |
| no_utter_dup | drop the dialog if context == response |
| re_name | replace person names with NAME1, NAME2, ... |
| no_ad | remove dialogs that are likely advertisements (the same response paired with multiple contexts), following prior papers |
| de_generic_dialog | remove generic responses, following prior papers |
| no_short_response | drop all over-short responses at the end of a dialog |

| Argument | Description |
| :--------------- | :------------------- |
| bert_clean | clean sentences with functions from BertTokenizer |
| cleantext_clean | clean with clean-text (phone numbers, e-mail addresses, Unicode errors, etc.) |

| Argument | Description |
| :--------------- | :------------------- |
| no_short | remove sentences that are too short |
| no_long | remove sentences that are too long |
| de_reply_tag | remove the Weibo reply marker "回复 @XXX:" |
| de_hashtag | remove "#XXX#" hashtags from sentences |
| de_emotion | remove ":XXX:" emoticons from sentences |
| de_mention | remove mentions such as "@Cindy", "@Bob:", "@Amy:" from sentences |
| no_mention | remove sentences that contain "@XXX" |
| de_repost | remove "//XXX" from sentences |
| de_duplicated | reduce repeated phrases within a sentence (to be optimized with a suffix-based algorithm) |
| de_emoji | remove emoji (to be completed) |
| no_special_topic | filter out dialogs containing words from the special-topic list |
| no_str_blacklist | filter out dialogs containing blacklisted words (string match) |
| no_toupiao | detect and drop Weibo polls |
| no_specific_utter | remove certain specific sentences |
| contain_zh | drop sentences that contain no Chinese |
| de_single_repost_mention | remove "@XXX:" |
| de_weibo_url | remove Weibo short URLs ("http://t.c…") |
| de_url | remove URLs |
| de_angle | remove angle-bracketed "<XX>" where XX is non-Chinese |
| de_alpha_num | remove long meaningless runs of digits and letters |
| de_specific | remove fixed patterns from sentences |

| Argument | Description |
| :--------------- | :------------------- |
| de_showall | remove "...显示全部" ("... show all") found in certain files |
| de_brackets | remove "[XXX]" found in certain files |

| Argument | Description |
| :--------------- | :------------------- |
| no_word_blacklist | filter out dialogs containing blacklisted words after tokenization |
| no_alpha_noise | filter out sentences containing letter combinations that do not form English words |
| check_confuse_word | save dialogs containing words from the confusion list for later recall |
| yda_dedupl | drop a sentence if the proportion of occurrences of a single word in it exceeds a threshold |
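To make the sentence-level rules above concrete, two of the Weibo-specific patterns might be implemented roughly as follows; the exact regexes in src/rules may differ:

```python
import re

def de_reply_tag(text: str) -> str:
    # Remove the leading Weibo reply marker, e.g. "回复 @XXX:".
    return re.sub(r"回复\s*@\S+?[::]\s*", "", text)

def de_hashtag(text: str) -> str:
    # Remove Weibo hashtags of the form "#话题#".
    return re.sub(r"#[^#]*#", "", text)

print(de_reply_tag("回复 @Cindy:今天天气不错"))  # -> "今天天气不错"
print(de_hashtag("#早安# 今天天气不错"))         # -> " 今天天气不错"
```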