💥 Fast State-of-the-Art Tokenizers optimized for Research and Production

Overview

Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.

Main features:

  • Train new vocabularies and tokenize, using today's most used tokenizers.
  • Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.
  • Easy to use, but also extremely versatile.
  • Designed for research and production.
  • Normalization comes with alignment tracking. It's always possible to get the part of the original sentence that corresponds to a given token (see the sketch after this list).
  • Does all the pre-processing: truncation, padding, and adding the special tokens your model needs.
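
Alignment tracking and the pre-processing steps are exposed directly on the tokenizer. A minimal sketch (the pretrained model name is just an illustrative example, and Tokenizer.from_pretrained is the Hub loader listed in the 0.11.0 notes below):

from tokenizers import Tokenizer

# Load any tokenizer.json-based tokenizer from the Hugging Face Hub
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

text = "Hello, y'all!"
output = tokenizer.encode(text)
# Each token carries (start, end) offsets into the original string;
# special tokens like [CLS] map to the empty (0, 0) span
for token, (start, end) in zip(output.tokens, output.offsets):
    print(token, "->", text[start:end])

# Truncation and padding are configured once, on the tokenizer itself
tokenizer.enable_truncation(max_length=128)
tokenizer.enable_padding(pad_token="[PAD]")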

Bindings

We provide bindings to the following languages (more to come!):

  • Rust (the original implementation)
  • Python
  • Node.js

Quick example using Python:

Choose your model from Byte-Pair Encoding (BPE), WordPiece, or Unigram and instantiate a tokenizer:

from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE())
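
WordPiece and Unigram are instantiated the same way (a quick sketch; passing unk_token is optional but usually what you want):

from tokenizers.models import WordPiece, Unigram

tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
# or: tokenizer = Tokenizer(Unigram())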

You can customize how pre-tokenization (e.g., splitting into words) is done:

from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
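
Pre-tokenizers also compose. A sketch using two components that ship with the library (Digits is the pre-tokenizer listed in the node 0.8.0 notes below):

from tokenizers import pre_tokenizers
from tokenizers.pre_tokenizers import Digits, Whitespace

# Split on whitespace first, then isolate every digit as its own token
tokenizer.pre_tokenizer = pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=True)])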

Then training your tokenizer on a set of files takes just two lines of code:

from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
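
If your corpus already lives in memory, you can skip the files entirely: training from any iterator of strings is also supported (this is the train-from-memory feature listed in the 0.10.0 notes below). A sketch:

data = ["A first sentence.", "Another sentence.", "And one more."]
tokenizer.train_from_iterator(data, trainer=trainer)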

Once your tokenizer is trained, encode any text with just one line:

output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]

Check the Python documentation or the Python quicktour to learn more!


Comments
  • Can't import any modules

    What it says on the tin. Every module I try importing into a script is spitting out a "module not found" error.

    Traceback (most recent call last):
      File "ab2.py", line 3, in <module>
        from tokenizers.tools import BertWordPieceTokenizer
    ImportError: cannot import name 'BertWordPieceTokenizer' from 'tokenizers.tools' (/home/../anaconda3/envs/tokenizers/lib/python3.7/site-packages/tokenizers/tools/__init__.py)

    Traceback (most recent call last):
      File "ab2.py", line 3, in <module>
        from transformers import BertWordPieceTokenizer
    ImportError: cannot import name 'BertWordPieceTokenizer' from 'transformers' (/home/../anaconda3/envs/tokenizers/lib/python3.7/site-packages/transformers/__init__.py)

    I've tried:

    import BertWordPieceTokenizer
    from tokenizers.toold import AutoTokenizer
    from tokenizers import BartTokenizer

    to illustrate a few examples.

    I've installed Tokenizers in an anaconda3 venv via pip, via conda-forge, and compiled from source.

    I've tried installing Transformers as well and get the same errors. Installing Tokenizers and then Transformers, or Transformers and then Tokenizers, gives the same errors too.

    I've looked through the Tokenizers code and, unless I'm missing something (entirely possible), AutoTokenizer isn't even part of the package? I'll admit I'm not a very experienced programmer, but I'll be damned if I can find it.

    Help would be appreciated.

    System specs are:

    Linux Mint 21.1, RTX 2080 Ti, i7-8700K

    cuDNN 8.1.1, CUDA 11.2.0, TensorRT 7.2.3, Python 3.7 (by the way, figuring out what was needed here, finding the files, and actually installing them was beyond arduous. There has to be a better way. It's the only way I could get anything at all to work, though).
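
    For what it's worth, the imports above mix the two libraries up: BertWordPieceTokenizer is exported from tokenizers.implementations (and re-exported at the package root), not from tokenizers.tools, while AutoTokenizer is a transformers class, not a tokenizers one. A minimal check, assuming reasonably recent versions of both packages:

    from tokenizers import BertWordPieceTokenizer  # re-exported from tokenizers.implementations
    from transformers import AutoTokenizer         # lives in transformers, not tokenizers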

    opened by kronkinatorix 1
  • How to decode with the existing tokenizer


    I trained the tokenizer following the Hugging Face tutorial:

    from tokenizers import Tokenizer
    from tokenizers.models import BPE
    from tokenizers.trainers import BpeTrainer
    from tokenizers.pre_tokenizers import Whitespace
    
    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
    tokenizer.pre_tokenizer = Whitespace()
    files = [f"wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]]
    tokenizer.train(files, trainer)
    tokenizer.save("tokenizer-wiki.json")
    

    But I don't know how to use the existing tokenizer for decoding:

    tokenizer = Tokenizer.from_file("tokenizer-wiki.json")
    o = tokenizer.encode("sd jk sds  sds")
    tokenizer.decode(o.ids)
    # s d j k s ds s ds
    

    I know we can recover the output with o.offsets, but what if we do not know the offsets, e.g. when decoding ids produced by a language model or an NMT system?
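
    One component worth checking (a sketch, not a guaranteed fix for this setup): what decode() returns is shaped by the tokenizer's decoder, and several ship in tokenizers.decoders (see the Decoders documentation entry in the 0.11.0 notes below). For example, attaching a BPE decoder that merges pieces on a suffix marker, assuming your vocabulary actually uses such a suffix:

    from tokenizers import Tokenizer, decoders

    tokenizer = Tokenizer.from_file("tokenizer-wiki.json")
    # BPEDecoder joins the pieces, treating the suffix as a word boundary
    tokenizer.decoder = decoders.BPEDecoder(suffix="</w>")
    print(tokenizer.decode(tokenizer.encode("sd jk sds  sds").ids))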

    opened by ZhiYuanZeng 4
  • Is there any support for 'google/tapas-mini-finetuned-wtq' tokenizer?


    I'm trying to run a tokenizer in Java and eventually compile it to run on Android for an open-domain question answering project. I'm wondering why 'google/tapas-mini-finetuned-wtq' doesn't work with DeepJavaLibrary when the tokenizers for more popular models work fine. I'm assuming there is no fast tokenizer for TAPAS, so I was wondering if anyone has advice on how to go about running the TAPAS tokenizer and model on Android/Java.

    opened by memetrusidovski 4
  • OpenSSL internal error when importing tokenizers module


    When importing tokenizers 0.13.2 or 0.13.1 in a FIPS-mode-enabled environment on Red Hat Enterprise Linux 8.6 (Ootpa), we see this error:

    sh-4.4# python3 -c "import tokenizers"
    fips.c(145): OpenSSL internal error, assertion failed: FATAL FIPS SELFTEST FAILURE
    Aborted (core dumped)
    

    Additional info:

    No errors when using tokenizers==0.13.0 or tokenizers==0.11.4
    Python 3.8.13
    OpenSSL 1.1.1g FIPS  21 Apr 2020 or OpenSSL 1.1.1k  FIPS 25 Mar 2021
    
    opened by wai25 3
Releases(v0.13.2)
  • v0.13.2(Nov 7, 2022)

  • python-v0.13.2(Nov 7, 2022)

  • node-v0.13.2(Nov 7, 2022)

  • v0.13.1(Oct 6, 2022)

  • python-v0.13.1(Oct 6, 2022)

  • node-v0.13.1(Oct 6, 2022)

  • python-v0.13.0(Sep 21, 2022)

    [0.13.0]

    • [#956] PyO3 version upgrade
    • [#1055] M1 automated builds
    • [#1008] Decoder is now a composable trait, without breaking backward compatibility
    • [#1047, #1051, #1052] Processor is now a composable trait, without breaking backward compatibility

    Both trait changes warrant a "major" number since, despite best efforts to not break backward compatibility, the code is different enough that we cannot be exactly sure.

  • v0.13.0(Sep 19, 2022)

    [0.13.0]

    • [#1009] unstable_wasm feature to support building on Wasm (it's unstable!)
    • [#1008] Decoder is now a composable trait, without breaking backward compatibility
    • [#1047, #1051, #1052] Processor is now a composable trait, without breaking backward compatibility

    Both trait changes warrant a "major" number since, despite best efforts to not break backward compatibility, the code is different enough that we cannot be exactly sure.

  • node-v0.13.0(Sep 19, 2022)

    [0.13.0]

    • [#1008] Decoder is now a composable trait, without breaking backward compatibility
    • [#1047, #1051, #1052] Processor is now a composable trait, without breaking backward compatibility
  • python-v0.12.1(Apr 13, 2022)

  • v0.12.0(Mar 31, 2022)

    [0.12.0]

    Bump minor version because of a breaking change.

    The breaking change was causing more issues upstream in transformers than anticipated: https://github.com/huggingface/transformers/pull/16537#issuecomment-1085682657

    The decision was to roll back that breaking change and figure out a different way to make this modification later.

    • [#938] Breaking change. The Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own; tokenizers themselves should be error-free.

    • [#939] Making the regex in ByteLevel pre_tokenizer optional (necessary for BigScience)

    • [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens)

    • [#954] Fixed not being able to save vocabularies with holes in the vocab (ConvBert). Warnings are now emitted instead of panicking.

    • [#961] Added link for Ruby port of tokenizers

    • [#960] Feature gate for cli and its clap dependency

  • python-v0.12.0(Mar 31, 2022)

    [0.12.0]

    The breaking change was causing more issues upstream in transformers than anticipated: https://github.com/huggingface/transformers/pull/16537#issuecomment-1085682657

    The decision was to roll back that breaking change and figure out a different way to make this modification later.

    Bump minor version because of a breaking change.

    • [#938] Breaking change. The Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own; tokenizers themselves should be error-free.

    • [#939] Making the regex in ByteLevel pre_tokenizer optional (necessary for BigScience)

    • [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens)

    • [#954] Fixed not being able to save vocabularies with holes in the vocab (ConvBert). Warnings are now emitted instead of panicking.

    • [#962] Fix tests for python 3.10

    • [#961] Added link for Ruby port of tokenizers

  • node-v0.12.0(Mar 31, 2022)

    [0.12.0]

    The breaking change was causing more issues upstream in transformers than anticipated: https://github.com/huggingface/transformers/pull/16537#issuecomment-1085682657

    The decision was to roll back that breaking change and figure out a different way to make this modification later.

    Bump minor version because of a breaking change. Using 0.12 to match other bindings.

    • [#938] Breaking change. The Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own; tokenizers themselves should be error-free.

    • [#939] Making the regex in ByteLevel pre_tokenizer optional (necessary for BigScience)

    • [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens)

    • [#954] Fixed not being able to save vocabularies with holes in the vocab (ConvBert). Warnings are now emitted instead of panicking.

    • [#961] Added link for Ruby port of tokenizers

  • v0.11.2(Feb 28, 2022)

  • python-v0.11.6(Feb 28, 2022)

  • node-v0.8.3(Feb 28, 2022)

  • python-v0.11.5(Feb 16, 2022)

  • v0.11.1(Jan 17, 2022)

    • [#882] Fixing Punctuation deserialize without argument.
    • [#868] Fixing missing direction in TruncationParams
    • [#860] Adding TruncationSide to TruncationParams
  • python-v0.11.3(Jan 17, 2022)

    • [#882] Fixing Punctuation deserialize without argument.
    • [#868] Fixing missing direction in TruncationParams
    • [#860] Adding TruncationSide to TruncationParams
  • node-v0.8.2(Jan 17, 2022)

  • node-v0.8.1(Jan 17, 2022)

  • python-v0.11.4(Jan 17, 2022)

  • python-v0.11.2(Jan 4, 2022)

  • python-v0.11.1(Dec 28, 2021)

  • python-v0.11.0(Dec 24, 2021)

    Fixed

    • [#585] Conda version should now work on old CentOS
    • [#844] Fixing interaction between is_pretokenized and trim_offsets.
    • [#851] Doc links

    Added

    • [#657]: Add SplitDelimiterBehavior customization to Punctuation constructor
    • [#845]: Documentation for Decoders.

    Changed

    • [#850]: Added a feature gate to enable disabling http features
    • [#718]: Fix WordLevel tokenizer determinism during training
    • [#762]: Add a way to specify the unknown token in SentencePieceUnigramTokenizer
    • [#770]: Improved documentation for UnigramTrainer
    • [#780]: Add Tokenizer.from_pretrained to load tokenizers from the Hugging Face Hub
    • [#793]: Saving a pretty JSON file by default when saving a tokenizer
  • node-v0.8.0(Sep 2, 2021)

    BREAKING CHANGES

    • Many improvements on the Trainer (#519). The files must now be provided first when calling tokenizer.train(files, trainer).

    Features

    • Adding the TemplateProcessing
    • Add WordLevel and Unigram models (#490)
    • Add nmtNormalizer and precompiledNormalizer normalizers (#490)
    • Add templateProcessing post-processor (#490)
    • Add digitsPreTokenizer pre-tokenizer (#490)
    • Add support for mapping to sequences (#506)
    • Add splitPreTokenizer pre-tokenizer (#542)
    • Add behavior option to the punctuationPreTokenizer (#657)
    • Add the ability to load tokenizers from the Hugging Face Hub using fromPretrained (#780)

    Fixes

    • Fix a bug where long tokenizer.json files would be incorrectly deserialized (#459)
    • Fix RobertaProcessing deserialization in PostProcessorWrapper (#464)
  • python-v0.10.3(May 24, 2021)

    Fixed

    • [#686]: Fix SPM conversion process for whitespace deduplication
    • [#707]: Fix stripping strings containing Unicode characters

    Added

    • [#693]: Add a CTC Decoder for Wav2Vec2 models

    Removed

    • [#714]: Removed support for Python 3.5
  • python-v0.10.2(Apr 5, 2021)

    Fixed

    • [#652]: Fix offsets for Precompiled corner case
    • [#656]: Fix BPE continuing_subword_prefix
    • [#674]: Fix Metaspace serialization problems
  • python-v0.10.1(Feb 4, 2021)

    Fixed

    • [#616]: Fix SentencePiece tokenizers conversion
    • [#617]: Fix offsets produced by Precompiled Normalizer (used by tokenizers converted from SPM)
    • [#618]: Fix Normalizer.normalize with PyNormalizedStringRefMut
    • [#620]: Fix serialization/deserialization for overlapping models
    • [#621]: Fix ByteLevel instantiation from a previously saved state (using __getstate__())
  • python-v0.10.0(Jan 12, 2021)

    Added

    • [#508]: Add a Visualizer for notebooks to help understand how the tokenizers work
    • [#519]: Add a WordLevelTrainer used to train a WordLevel model
    • [#533]: Add support for conda builds
    • [#542]: Add Split pre-tokenizer to easily split using a pattern
    • [#544]: Ability to train from memory. This also improves the integration with datasets
    • [#590]: Add getters/setters for components on BaseTokenizer
    • [#574]: Add fuse_unk option to SentencePieceBPETokenizer

    Changed

    • [#509]: Automatically stubbing the .pyi files
    • [#519]: Each Model can return its associated Trainer with get_trainer()
    • [#530]: The various attributes on each component can be get/set (i.e., tokenizer.model.dropout = 0.1)
    • [#538]: The API Reference has been improved and is now up-to-date.

    Fixed

    • [#519]: During training, the Model is now trained in-place. This fixes several bugs that forced reloading the Model after training.
    • [#539]: Fix BaseTokenizer enable_truncation docstring
Owner
Hugging Face
Solving NLP, one commit at a time!