
Overview

textgenrnn


Easily train your own text-generating neural network of any size and complexity on any text dataset with a few lines of code, or quickly train on a text using a pretrained model.

textgenrnn is a Python 3 module on top of Keras/TensorFlow for creating char-rnns, with many cool features:

  • A modern neural network architecture which utilizes new techniques such as attention-weighting and skip-embedding to accelerate training and improve model quality.
  • Train on and generate text at either the character-level or word-level.
  • Configure RNN size, the number of RNN layers, and whether to use bidirectional RNNs.
  • Train on any generic input text file, including large files.
  • Train models on a GPU and then use them to generate text with a CPU.
  • Utilize a powerful CuDNN implementation of RNNs when training on a GPU, which massively speeds up training compared to typical LSTM implementations.
  • Train the model using contextual labels, allowing it to learn faster and produce better results in some cases.

You can play with textgenrnn and train any text file with a GPU for free in this Colaboratory Notebook! Read this blog post or watch this video for more information!

Examples

from textgenrnn import textgenrnn

textgen = textgenrnn()
textgen.generate()
[Spoiler] Anyone else find this post and their person that was a little more than I really like the Star Wars in the fire or health and posting a personal house of the 2016 Letter for the game in a report of my backyard.

The included model can easily be trained on new texts, and can generate appropriate text even after a single pass of the input data.

textgen.train_from_file('hacker_news_2000.txt', num_epochs=1)
textgen.generate()
Project State Project Firefox

The model weights are relatively small (2 MB on disk), and they can easily be saved and loaded into a new textgenrnn instance. As a result, you can play with models which have been trained on hundreds of passes through the data. (In fact, textgenrnn learns so well that you have to increase the temperature significantly for creative output!)

textgen_2 = textgenrnn('/weights/hacker_news.hdf5')
textgen_2.generate(3, temperature=1.0)
Why we got money “regular alter”

Urburg to Firefox acquires Nelf Multi Shamn

Kubernetes by Google’s Bern

You can also train a new model, with support for word-level embeddings and bidirectional RNN layers, by adding new_model=True to any train function, as in the sketch below.
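A minimal sketch (the configuration keyword names follow the demo/Colab notebook; the values are illustrative):

from textgenrnn import textgenrnn

textgen = textgenrnn()

# Build and train a fresh word-level, bidirectional model instead of
# fine-tuning the included pretrained weights.
textgen.train_from_file(
    'hacker_news_2000.txt',
    new_model=True,          # train a new model from scratch
    word_level=True,         # word-level instead of character-level
    rnn_bidirectional=True,  # bidirectional RNN layers
    rnn_layers=2,
    rnn_size=128,
    num_epochs=10,
)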

Interactive Mode

It's also possible to get involved in how the output unfolds, step by step. Interactive mode suggests the top N options for the next character/word and lets you pick one.

When running textgenrnn in the terminal, pass interactive=True and top_n=N to generate(). N defaults to 3.

from textgenrnn import textgenrnn

textgen = textgenrnn()
textgen.generate(interactive=True, top_n=5)

(Demo GIF: word-level interactive mode)

This can add a human touch to the output; it feels like you're the writer! (reference)

Usage

textgenrnn can be installed from PyPI via pip:

pip3 install textgenrnn

For the latest textgenrnn, you must have TensorFlow 2.1.0 or later.

You can view a demo of common features and model configuration options in this Jupyter Notebook.

/datasets contains example datasets using Hacker News/Reddit data for training textgenrnn.

/weights contains further-pretrained models on the aforementioned datasets which can be loaded into textgenrnn.

/outputs contains examples of text generated from the above pretrained models.

Neural Network Architecture and Implementation

textgenrnn is based on the char-rnn project by Andrej Karpathy, with a few modern optimizations such as the ability to work with very small text sequences.

(Diagram: default model architecture)

The included pretrained model follows a neural network architecture inspired by DeepMoji. For the default model, textgenrnn takes in an input of up to 40 characters, converts each character to a 100-D character embedding vector, and feeds those into a 128-cell long short-term memory (LSTM) recurrent layer. Those outputs are then fed into another 128-cell LSTM. All three layers are then fed into an Attention layer to weight the most important temporal features and average them together (and since the embeddings + 1st LSTM are skip-connected into the attention layer, the model updates can backpropagate to them more easily and prevent vanishing gradients). That output is mapped to probabilities, over up to 394 different characters, of being the next character in the sequence, including uppercase characters, lowercase characters, punctuation, and emoji. (If training a new model on a new dataset, all of the numeric parameters above can be configured.)
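As a rough illustration, here is a minimal tf.keras sketch of that default architecture. The layer sizes come from the paragraph above; the attention step is approximated here with a softmax-weighted average rather than textgenrnn's actual AttentionWeightedAverage layer (borrowed from DeepMoji):

import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LENGTH = 40   # input window: up to 40 characters
VOCAB_SIZE = 394  # distinct characters (upper/lowercase, punctuation, emoji)
EMBED_DIM = 100   # character embedding dimension
RNN_SIZE = 128    # LSTM cells per recurrent layer

inp = layers.Input(shape=(MAX_LENGTH,))
emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inp)
rnn1 = layers.LSTM(RNN_SIZE, return_sequences=True)(emb)
rnn2 = layers.LSTM(RNN_SIZE, return_sequences=True)(rnn1)

# Skip-connect the embeddings and both LSTM layers into the attention layer,
# so gradients can reach the earlier layers directly.
concat = layers.Concatenate()([emb, rnn1, rnn2])

# Approximate attention-weighted averaging: score each timestep, softmax the
# scores over time, and average the features with those weights.
scores = layers.Dense(1, activation="tanh")(concat)
weights = layers.Softmax(axis=1)(scores)
attn = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([concat, weights])

probs = layers.Dense(VOCAB_SIZE, activation="softmax")(attn)
model = Model(inp, probs)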

(Diagram: context model architecture)

Alternatively, if context labels are provided with each text document, the model can be trained in a contextual mode, where the model learns the text given the context, so the recurrent layers learn decontextualized language. The text-only path can piggyback off the decontextualized layers; in all, this results in much faster training and better quantitative and qualitative model performance than training the model given the text alone.
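As an illustration, a minimal sketch of contextual training, assuming train_on_texts accepts a context_labels list with one label per document (the texts and labels below are hypothetical):

from textgenrnn import textgenrnn

textgen = textgenrnn()

texts = ['AcmeCorp shares rise 10% after earnings',   # hypothetical documents
         'AcmeCorp announces a new widget line']
context_labels = ['business', 'business']             # one label per document

# Contextual training: the model sees each text together with its label.
textgen.train_on_texts(texts, context_labels=context_labels, num_epochs=5)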

The model weights included with the package are trained on hundreds of thousands of text documents from Reddit submissions (via BigQuery), from a diverse variety of subreddits. The network was also trained using the decontextual approach noted above in order to both improve training performance and mitigate authorial bias.

When fine-tuning the model on a new dataset of texts using textgenrnn, all layers are retrained. However, since the original pretrained network has a much more robust "knowledge" initially, the new textgenrnn trains faster and more accurately in the end, and can potentially learn new relationships not present in the original dataset (e.g. the pretrained character embeddings include the context for the character for all possible types of modern internet grammar).

Additionally, the retraining is done with a momentum-based optimizer and a linearly decaying learning rate, both of which prevent exploding gradients and make it much less likely that the model diverges after training for a long time.
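For illustration only (these are not necessarily textgenrnn's exact optimizer settings), such a schedule might look like this in tf.keras:

from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import LearningRateScheduler

BASE_LR = 4e-3
NUM_EPOCHS = 10

def linear_decay(epoch):
    # Decay the learning rate linearly toward zero over the training run.
    return BASE_LR * (1.0 - epoch / NUM_EPOCHS)

optimizer = SGD(learning_rate=BASE_LR, momentum=0.9, nesterov=True)  # momentum-based
lr_schedule = LearningRateScheduler(linear_decay)
# model.compile(loss='categorical_crossentropy', optimizer=optimizer)
# model.fit(..., callbacks=[lr_schedule])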

Notes

  • You will not get quality generated text 100% of the time, even with a heavily-trained neural network. That's the primary reason viral blog posts/tweets utilizing NN text generation often generate lots of texts and curate/edit the best ones afterward.

  • Results will vary greatly between datasets. Because the pretrained neural network is relatively small, it cannot store as much data as RNNs typically flaunted in blog posts. For best results, use a dataset with at least 2,000-5,000 documents. If a dataset is smaller, you'll need to train it for longer by setting num_epochs higher when calling a training method and/or training a new model from scratch. Even then, there is currently no good heuristic for determining a "good" model.

  • A GPU is not required to retrain textgenrnn, but training on a CPU will take much longer. If you do use a GPU, I recommend increasing the batch_size parameter for better hardware utilization, as in the sketch below.
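
For example (a minimal sketch; the file path and parameter values are illustrative, not prescriptive):

from textgenrnn import textgenrnn

textgen = textgenrnn()

# 'my_corpus.txt' is a placeholder path. More epochs help small datasets;
# larger batches keep a GPU busy.
textgen.train_from_file('my_corpus.txt', num_epochs=20, batch_size=1024)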

Future Plans for textgenrnn

  • More formal documentation

  • A web-based implementation using tensorflow.js (works especially well due to the network's small size)

  • A way to visualize the attention-layer outputs to see how the network "learns."

  • A mode to allow the model architecture to be used for chatbot conversations (may be released as a separate project)

  • More depth toward context (positional context + allowing multiple context labels)

  • A larger pretrained network which can accommodate longer character sequences and a more in-depth understanding of language, creating better generated sentences.

  • Hierarchical softmax activation for word-level models (once Keras has good support for it).

  • FP16 for superfast training on Volta/TPUs (once Keras has good support for it).

Articles/Projects using textgenrnn

Articles

Projects

Tweets

Maintainer/Creator

Max Woolf (@minimaxir)

Max's open-source projects are supported by his Patreon. If you found this project helpful, any monetary contributions to the Patreon are appreciated and will be put to good creative use.

Credits

Andrej Karpathy for the original proposal of the char-rnn via the blog post The Unreasonable Effectiveness of Recurrent Neural Networks.

Daniel Grijalva for contributing an interactive mode.

License

MIT

Attention-layer code used from DeepMoji (MIT Licensed)

Comments
  • Colab notebook gives Keras multi_gpu_model error


    Hi, the Colab notebook linked on the main page no longer works. I ran the tensorflow cell and then the "!pip install" cell, and it downloaded some kind of wheel (which I've never seen it do before). Then it burped out the following error:

    ImportError: cannot import name 'multi_gpu_model' from 'tensorflow.keras.utils' (/usr/local/lib/python3.7/dist-packages/tensorflow/keras/utils/__init__.py)

    It appears some people have solved this by downgrading Keras to 2.2.4, but either it didn't work for me, or (more likely) I don't know how/where to do this in the notebook. Any ideas?

    opened by peterfarted 10
  • Using 0% of GPU


    It's purely using my CPU. I can't find anything about GPU settings other than multi_gpu, which spits out an error saying "Division by zero."

    Is there anything I'm missing?

    Using version 1.4.1 straight from Pip, and I have tensorflow-gpu installed.

    opened by kast1450 10
  • error when train_on_texts instead of train_from_largetext_file: AttributeError: 'numpy.ndarray' object has no attribute 'lower'


    Hi, I tried to train your example at https://github.com/minimaxir/textgenrnn/blob/master/docs/textgenrnn-demo.ipynb

    my code looks like:

    from textgenrnn import textgenrnn
    
    textgen = textgenrnn()
    textgen.reset()
    texts = ['Never gonna give you up, never gonna let you down',
            'Never gonna run around and desert you',
            'Never gonna make you cry, never gonna say goodbye',
            'Never gonna tell a lie and hurt you']
    
    
    textgen.train_on_texts(texts, num_epochs=2,  gen_epochs=2)
    

    I get the following output:

    Using TensorFlow backend.
    2018-08-01 15:40:15.286776: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2018-08-01 15:40:15.356998: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2018-08-01 15:40:15.357329: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties: 
    name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.683
    pciBusID: 0000:01:00.0
    totalMemory: 7.93GiB freeMemory: 7.61GiB
    2018-08-01 15:40:15.357343: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
    2018-08-01 15:40:17.171291: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
    2018-08-01 15:40:17.171337: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958]      0 
    2018-08-01 15:40:17.171348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   N 
    2018-08-01 15:40:17.171563: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7343 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
    Training on 174 character sequences.
    Epoch 1/2
    Traceback (most recent call last):
      File "test2.py", line 11, in <module>
        textgen.train_on_texts(texts, num_epochs=2,  gen_epochs=2)
      File "/home/tom/programming/tensorflow/venv/lib/python3.6/site-packages/textgenrnn/textgenrnn.py", line 194, in train_on_texts
        validation_steps=val_steps
      File "/home/tom/programming/tensorflow/venv/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
        return func(*args, **kwargs)
      File "/home/tom/programming/tensorflow/venv/lib/python3.6/site-packages/keras/engine/training.py", line 1415, in fit_generator
        initial_epoch=initial_epoch)
      File "/home/tom/programming/tensorflow/venv/lib/python3.6/site-packages/keras/engine/training_generator.py", line 177, in fit_generator
        generator_output = next(output_generator)
      File "/home/tom/programming/tensorflow/venv/lib/python3.6/site-packages/keras/utils/data_utils.py", line 793, in get
        six.reraise(value.__class__, value, value.__traceback__)
      File "/home/tom/.local/lib/python3.6/site-packages/six.py", line 693, in reraise
        raise value
      File "/home/tom/programming/tensorflow/venv/lib/python3.6/site-packages/keras/utils/data_utils.py", line 658, in _data_generator_task
        generator_output = next(self._generator)
      File "/home/tom/programming/tensorflow/venv/lib/python3.6/site-packages/textgenrnn/model_training.py", line 49, in generate_sequences_from_texts
        x = process_sequence([x], textgenrnn, new_tokenizer)
      File "/home/tom/programming/tensorflow/venv/lib/python3.6/site-packages/textgenrnn/model_training.py", line 79, in process_sequence
        X = new_tokenizer.texts_to_sequences(X)
      File "/home/tom/programming/tensorflow/venv/lib/python3.6/site-packages/keras_preprocessing/text.py", line 274, in texts_to_sequences
        return list(self.texts_to_sequences_generator(texts))
      File "/home/tom/programming/tensorflow/venv/lib/python3.6/site-packages/keras_preprocessing/text.py", line 299, in texts_to_sequences_generator
        text = text.lower()
    AttributeError: 'numpy.ndarray' object has no attribute 'lower'
    

    The following does work though and I get the training step outputs:

    from textgenrnn import textgenrnn
    textgen = textgenrnn()
    textgen.reset()
    
    train_function = textgen.train_from_largetext_file
    
    train_function(
        file_path='somevalidpath',
        new_model=True,)
    

    It seems like it's an issue only with training from a list of strings directly, instead of loading from a file.

    PS: I am in a python virtual environment with tensorflow-gpu installed on latest Ubuntu 18.

    opened by nylki 9
  • Why is the code so broken?


    I literally tried without changing ANYTHING, even renamed my txt file "tinyshakespeareplays" BUT I STILL GET THIS ERROR!

    UnknownError                              Traceback (most recent call last)
    in ()
         18     max_length=model_cfg['max_length'],
         19     dim_embeddings=100,
    ---> 20     word_level=model_cfg['word_level'])

    4 frames
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
         53     ctx.ensure_initialized()
         54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
    ---> 55                                         inputs, attrs, num_outputs)
         56   except core._NotOkStatusException as e:
         57     if name is not None:

    UnknownError: Graph execution error:

    Fail to find the dnn implementation. [[{{node CudnnRNN}}]] [[model_1/rnn_1/PartitionedCall]] [Op:__inference_train_function_9347]

    Please fix this garbage! And next time, don't hire drunk monkeys to code a program that only aliens know how to run.

    opened by Blazeolmo 6
  • textgenrnn gets stuck in loops


    This behavior is visible at In[3] in the notebook

    https://github.com/minimaxir/textgenrnn/blob/master/docs/textgenrnn-demo.ipynb

    Trump confirms a complete shot of the state of the state of the state of the state of the state of the state of the season to the state of the state of my parents for the state of the man who was a good day.
    

    This phenomenon can be avoided by raising the temperature, but that produces clearly bad results.

    opened by paulhoule 6
  • textgenrnn() not working in Google Colab without "%tensorflow_version 1.x"

    Hello! I was using textgenrnn(name=model_name) in a Google Colab notebook a few weeks ago to train a model, and at the time it worked perfectly. Today I repeated the same steps I took before, but it gave me this error (see picture). If you need more details, feel free to let me know.

    link to the google colab: https://colab.research.google.com/drive/1nNB-uNfBKmCOpaOY9owgSFKgMxl2qCmZ#scrollTo=P8wSlgXoDPCR


    opened by Defender373 5
  • Migrate to TF 2.0/tf.keras


    I had made textgenrnn with external Keras since native TF was missing features. Now that there is parity, I am OK with merging it back into native TF with TF 2.0 support. textgenrnn does not use much custom Keras code, so it should be a relatively simple change; the concern is not breaking old models, which may be possible due to the SavedModel change.

    TF 2.1 also has TPU/mixed precision support which will be very helpful for training performance.

    enhancement 
    opened by minimaxir 5
  • Dimension 0 in both shapes must be equal


    Hey guys, I'm trying to continue training a model and I'm getting an error:

    ValueError: Dimension 0 in both shapes must be equal, but are 465 and 56. Shapes are [465,100] and [56,100]. for 'Assign' (op: 'Assign') with input shapes: [465,100], [56,100].

    What I'm doing is:

    textgen = textgenrnn()
    textgen.train_from_file('sentenses.txt', num_epochs=1, new_model=True)
    

    and then

    textgen = textgenrnn('textgenrnn_weights.hdf5')
    textgen.train_from_file('sentenses.txt', num_epochs=100)
    textgen.generate()
    

    I'm pretty new to all this magic, so I might not get even basic things, so any help is highly appreciated.

    opened by dporechny 5
  • UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 4: character maps to <undefined>


    Ran into the following issue:

    UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 4: character maps to <undefined>

    Fixed by altering line 25 in textgenrnn.py to indicate the json vocab file is utf-8 encoded:

    with open(vocab_path, 'r', encoding='utf8') as json_file:

    opened by xofeoj 5
  • cant train with gpu


    from textgenrnn import textgenrnn

    2020-06-02 10:49:30.088409: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
    2020-06-02 10:49:30.095556: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    Using TensorFlow backend.

    textgen = textgenrnn()

    2020-06-02 10:50:24.722132: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
    2020-06-02 10:50:24.754983: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7845GHz coreCount: 15 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 238.66GiB/s
    2020-06-02 10:50:24.767177: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
    2020-06-02 10:50:24.773910: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_10.dll'; dlerror: cublas64_10.dll not found
    2020-06-02 10:50:24.780227: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cufft64_10.dll'; dlerror: cufft64_10.dll not found
    2020-06-02 10:50:24.787048: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'curand64_10.dll'; dlerror: curand64_10.dll not found
    2020-06-02 10:50:24.794601: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found
    2020-06-02 10:50:24.801939: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cusparse64_10.dll'; dlerror: cusparse64_10.dll not found
    2020-06-02 10:50:24.808802: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudnn64_7.dll'; dlerror: cudnn64_7.dll not found
    2020-06-02 10:50:24.815566: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1598] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices...
    2020-06-02 10:50:24.830131: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    2020-06-02 10:50:24.848390: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2b4805a1180 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2020-06-02 10:50:24.855072: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
    2020-06-02 10:50:24.859543: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
    2020-06-02 10:50:24.864431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]

    textgen.generate()
    My favorite way to show in the barrian to the story and says "The Goble Star Enders are the competition second talks

    textgen.train_from_file(r'C:\Users\wow\textgenrnn\datasets\pasta.txt', num_epochs=250)
    4,494 texts collected.
    Training on 630,254 character sequences.
    Epoch 1/250
    213/4923 [>.............................] - ETA: 12:44 - loss: 1.5548

    I tried everything, but I can't get CUDA to go from 10.2 to 10.1.

    C:\Users\wow>nvidia-smi
    Tue Jun 02 15:17:11 2020
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 442.19       Driver Version: 442.19       CUDA Version: 10.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 1070   WDDM  | 00000000:01:00.0  On |                  N/A |
    |  0%   50C    P0    33W / 180W |    401MiB /  8192MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    opened by test1230-lab 4
  • python version and tensorflow problems


    Hey y'all, hope you're having a good day, I'll get right into my question.

    I'm pretty new to this whole neural network thing, so please forgive me if I'm one of today's lucky 10,000.

    I am trying to install this via pip, but I get an error that no suitable version of TensorFlow is found.
    I'm on Python 3.8.1 64-bit (with Python 3.8 added to PATH), on Windows 10. I run the following command: pip install textgenrnn
    and the error I get is this: gyazo link

    ERROR: Could not find a version that satisfies the requirement tensorflow>=2.1.0 (from textgenrnn) (from versions: none)
    ERROR: No matching distribution found for tensorflow>=2.1.0 (from textgenrnn)

    Not sure if this is the python version I have or something else.
    EDIT: I upgraded pip but I still get the same problem.

    opened by s0py 4
  • Using this module in browser


    I am wondering whether it is possible to train a small model (like MobileNet) with this solution and execute it in the browser.

    Could you tell me whether this is possible or not? If so, how could I start with it?

    opened by junoriosity 0
  • disable progressbar on GENERATION


    It worked fine when I used the CPU, but ever since I got the GPU working, it floods the command line with progress bars for every single generated character. My friend is running the same thing just fine on his GPU with no progress-bar spam.

    And no, I am NOT referring to the "1171/1173 [============================>.] - ETA: 0s - loss: 1.4614" progress bar; I do want that.

    opened by FlashlightET 5
  • ImportError: cannot import name 'multi_gpu_model' from 'tensorflow.keras.utils'


    I'm running into the following error while running from textgenrnn import textgenrnn:

     ImportError: cannot import name 'multi_gpu_model' from 'tensorflow.keras.utils' (/usr/local/lib/python3.9/site-packages/keras/api/_v2/keras/utils/__init__.py)
    

    macOS: Monterey 12.0.1, Keras: 2.9.0, textgenrnn: 2.0.0, Python: 3.9.9

    opened by allison-lowe 1
  • notebook errors


    The notebook has a lot of errors; you should test it. Most of the errors are solvable but require computer understanding that not everyone has (errors include an outdated TensorFlow, the GitHub link being outdated, and the notebook being in the wrong directory).

    opened by moob10293 4
  • Train to generate about Business


    Hello experts, I tried to modify this for a small assignment I have but could not figure it out. I can pay if an expert can guide me through the process.

    The requirement: I provide some text about a business, including an "about" section, a paragraph about its services, and any recent news.

    It should output a unique paragraph, something like personalized research on the business.

    Thanks

    opened by MrMegamind 2
Releases(v2.0.0)
  • v2.0.0(Feb 3, 2020)

  • v1.5.0(Jan 9, 2019)

    Two major features:

    Synthesis (beta)

    Generate text using two (or more!) trained models simultaneously. See this notebook for a demo.

    The results are messier than usual, so a lower temperature is recommended. It should work on both char-level and word-level models, or a mix of both. (However, I do not recommend mixing line-delimited and full-text models!)
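
    A rough sketch of synthesis (an assumption: the synthesize helper lives in textgenrnn.utils, as used in the demo notebook; the weight filenames are hypothetical):

    from textgenrnn import textgenrnn
    from textgenrnn.utils import synthesize

    model_a = textgenrnn('weights_a.hdf5')  # hypothetical trained models
    model_b = textgenrnn('weights_b.hdf5')

    # Generate with both models contributing; lower temperatures tame the mess.
    synthesize([model_a, model_b], n=3, temperature=[0.5, 0.2])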

    Please file issues if there are errors!

    Generate Progress Bar

    Thanks to tqdm, all generate functions show a progress bar! You can override this by passing progress=False to the function.

    Additionally, the default generate temperature is now [1.0, 0.5, 0.2, 0.2]!
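
    For example (a minimal sketch; the temperature values are illustrative):

    from textgenrnn import textgenrnn

    textgen = textgenrnn()

    # Suppress the per-generation progress bar and override the default
    # temperature schedule.
    textgen.generate(5, temperature=[1.0, 0.5], progress=False)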

    Source code(tar.gz)
    Source code(zip)
    textgenrnn-1.5.0.tar.gz(1.65 MB)
  • v1.4(Aug 9, 2018)

    Features

    • Interactive mode, which lets you control which text is added. (#52, thanks @Juanets !)
    • Allow backends other than TensorFlow (#44, thanks @torokati44 !)
    • Allow periodic weights saving (#37, thanks @IrekRybark !)
    • Multi-GPU support (beta: see #62 )

    Fixes

    • Handle prefix in word-level models correctly.
    Source code(tar.gz)
    Source code(zip)
  • v1.3.2(Aug 4, 2018)

  • v1.3.1(Jun 6, 2018)

  • v1.3(May 7, 2018)

  • v1.2.2(May 6, 2018)

  • v1.2.1(May 5, 2018)

  • v1.2(May 4, 2018)

  • v1.1(Apr 30, 2018)

    • Switched to a fit_generator implementation of generating sequences for training, instead of loading all sequences into memory. This will allow training large text files (10MB+) without requiring ridiculous amounts of RAM.
    • Better word_level support:
      • The model will only keep max_words words and discard the rest.
      • The model will not train to predict words not in the vocabulary.
      • All punctuation (including smart quotes) gets its own token.
      • When generating, newlines/tabs have surrounding whitespace stripped. (This is not the case for other punctuation, as there are too many rules around that.)
    • Training on a single text no longer uses meta tokens to indicate the start/end of the text, and does not use them when generating, which results in slightly better output.
    Source code(tar.gz)
    Source code(zip)
    textgenrnn-1.1.tar.gz(1.65 MB)
  • v1.0(Apr 24, 2018)

Owner
Max Woolf
Data Scientist @buzzfeed. Plotter of pretty charts.