A PyTorch implementation of auto-punctuation learned character by character


Learning Auto-Punctuation by Reading Engadget Articles


Links to my other work

🌟 Deep Learning Notes: A collection of my notes, going from basic multi-layer perceptrons to ConvNets and LSTMs, and from TensorFlow to PyTorch.

💥 Deep Learning Papers TLDR; A growing collection of my notes on deep learning papers! So far it covers the top papers from this year's ICLR.

Overview

This project trains a bi-directional GRU to learn how to automatically punctuate a sentence by reading blog posts from Engadget.com character by character. The set of operations it learns includes:

capitalization: <cap>
         comma:  ,
        period:  .
   dollar sign:  $
     semicolon:  ;
         colon:  :
  single quote:  '
  double quote:  "
  no operation: <nop>

Performance

After 24 epochs of training, the network achieves the following performance on the test-set:

    Test P/R After 24 Epochs 
    =================================
    Key: <nop>	Prec:  97.1%	Recall:  97.8%	F-Score:  97.4%
    Key: <cap>	Prec:  68.6%	Recall:  57.8%	F-Score:  62.7%
    Key:   ,	Prec:  30.8%	Recall:  30.9%	F-Score:  30.9%
    Key:   .	Prec:  43.7%	Recall:  38.3%	F-Score:  40.8%
    Key:   '	Prec:  76.9%	Recall:  80.2%	F-Score:  78.5%
    Key:   :	Prec:  10.3%	Recall:   6.1%	F-Score:   7.7%
    Key:   "	Prec:  26.9%	Recall:  45.1%	F-Score:  33.7%
    Key:   $	Prec:  64.3%	Recall:  61.6%	F-Score:  62.9%
    Key:   ;	Prec:   0.0%	Recall:   0.0%	F-Score:   N/A
    Key:   ?	Prec:   0.0%	Recall:   0.0%	F-Score:   N/A
    Key:   !	Prec:   0.0%	Recall:   0.0%	F-Score:   N/A

As a first attempt, the performance is pretty good, especially since I did not fine-tune with a smaller step size afterward and the Engadget dataset used here is small (4MB total).

Doubling the training gives a small improvement.

Table 2. After 48 epochs of training

    Test P/R  Epoch 48 Batch 380
    =================================
    Key: <nop>	Prec:  97.1%	Recall:  98.0%	F-Score:  97.6%
    Key: <cap>	Prec:  73.2%	Recall:  58.9%	F-Score:  65.3%
    Key:   ,	Prec:  35.7%	Recall:  32.2%	F-Score:  33.9%
    Key:   .	Prec:  45.0%	Recall:  39.7%	F-Score:  42.2%
    Key:   '	Prec:  81.7%	Recall:  83.4%	F-Score:  82.5%
    Key:   :	Prec:  12.1%	Recall:  10.8%	F-Score:  11.4%
    Key:   "	Prec:  25.2%	Recall:  44.8%	F-Score:  32.3%
    Key:   $	Prec:  51.4%	Recall:  87.8%	F-Score:  64.9%
    Key:   ;	Prec:   0.0%	Recall:   0.0%	F-Score:   N/A
    Key:   ?	Prec:   5.6%	Recall:   4.8%	F-Score:   5.1%
    Key:   !	Prec:   0.0%	Recall:   0.0%	F-Score:   N/A

Usage

If you feel like using some of the code, you can cite this project via

@article{deeppunc,
  title={Deep-Auto-Punctuation},
  author={Yang, Ge},
  journal={arxiv},
  year={2017},
  doi={10.5281/zenodo.438358},
  url={https://zenodo.org/record/438358;
       https://github.com/episodeyang/deep-auto-punctuation}
}

To run

First unzip the Engadget data into the folder ./engadget_data by running

tar -xvzf engadget_data.tar.gz

and then open the notebook Learning Punctuations by reading Engadget.ipynb and execute the cells.

To view the reporting, start a visdom server by running

python -m visdom.server

and then go to http://localhost:8097

Requirements

pytorch numpy matplotlib tqdm bs4

Model Setup and Considerations

The initial setup I began with was a single uni-directional GRU, with input domain [A-z0-9] and output domain consisting of the ops listed above. My hope at the time was to simply train the RNN to learn the corresponding operations. A few things jumped out during the experiments:

  1. Use a bi-directional GRU. With the uni-directional GRU, the network quickly learned capitalization, but it had difficulties with single quotes. In words like "I'm" and "won't", there is simply too much ambiguity when reading only the forward part of the word, and the network didn't have enough information to properly infer such punctuation.

    So I decided to change the uni-directional GRU to a bi-directional GRU. The result is much better prediction of single quotes in contractions.

    The network is still training, but the precision and recall for single quotes are now close to 80%.

    Using a bi-directional GRU is standard practice in NLP, but it is nice to experience the difference in performance and training first-hand.

    A side effect of this switch is that the network now runs almost 2x slower. This leads to the next item in this list:

  2. Use the smallest model possible. At the very beginning, my input embedding was borrowed from the Shakespeare model, so the input space included both upper- and lower-case letters. What I didn't realize was that the upper-case characters weren't needed, because all inputs were lower-case.

    So when training became painfully slow after the switch to the bi-directional GRU, I looked for ways to speed it up. A look at the input embedding made it obvious that half of the embedding space wasn't needed.

    Removing the unused upper-case characters made the training around 3x faster. This is a rough estimate, since I also decided to re-download the dataset on the same machine at the same time.

  3. Text formatting. Proper formatting of the input text crawled from Engadget.com was crucial, especially because many of the punctuation marks occur rarely and this is a character-level model. You can take a look at the crawled text inside ./engadget_data.tar.gz.

  4. Async and multi-process crawling is much, much faster. I initially wrote the Engadget crawler as a single-threaded class. Because the Python requests library is synchronous, the crawler spent virtually all of its time waiting on GET requests.

    This could be made a lot faster by parallelizing the crawling or by using a proper async pattern.

    This thought came to me pretty late, during the second crawl, so I did not implement it. But for future work, a parallel, async crawler is on the todo list.

  5. Using precision/recall in a multi-class scenario. The setup makes the reasonable assumption that the operations are mutually exclusive. The accuracy metrics used here are precision/recall and the F-score, both commonly used in the literature [1, 2]. The P/R and F-score are implemented according to Wikipedia [3, 4]; a sketch of this bookkeeping appears after this list.

    example accuracy report:

    Epoch 0 Batch 400 Test P/R
    =================================
    Key: <nop>	Prec:  99.1%	Recall:  96.6%	F-Score:  97.9%
    Key:   ,	Prec:   0.0%	Recall:   0.0%	F-Score:   N/A
    Key: <cap>	Prec: 100.0%	Recall:  75.0%	F-Score:  85.7%
    Key:   .	Prec:   0.0%	Recall:   0.0%	F-Score:   N/A
    Key:   '	Prec:  66.7%	Recall: 100.0%	F-Score:  80.0%
    
    
    true_p:	{'<nop>': 114, '<cap>': 3, "'": 2}
    p:	{'<nop>': 118, '<cap>': 4, "'": 2}
    all_p:	{'<nop>': 115, ',': 2, '<cap>': 3, '.': 1, "'": 3}
    
    400it [06:07,  1.33s/it]
    
  6. Hidden-layer initialization: In the past I've found it easier for the neural network to produce good results when both training and generation start from a zero initial state. In this case, because we are limited by compute time, I zero the hidden layer at the beginning of each file.

  7. Mini-batches and padding: During training, I first sort the entire training set by file length (there are 45k files) and arrange the files into batches, so that files inside each batch are of roughly similar size and only minimal padding is needed. When a file is too long, I use data.fuzzy_chunk_length() to calculate a good chunk length heuristically. As a result, most of the training requires no padding at all.

    Going from no mini-batching to a mini-batch size of 128, the time per batch hasn't changed much. The accuracy report above shows the training result after 24 epochs.
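
For reference, the per-key precision/recall bookkeeping described in item 5 above can be sketched as follows. This is a minimal re-implementation for illustration (the counter names mirror the printed report, but the function is an assumption, not the repository's actual code):

    from collections import Counter

    def pr_report(predictions, targets):
        """Per-key precision/recall/F-score for mutually exclusive operations.

        predictions and targets are equally long sequences of operation
        labels such as '<nop>', '<cap>', ',', '.', ...
        """
        true_p, p, all_p = Counter(), Counter(), Counter()
        for pred, target in zip(predictions, targets):
            p[pred] += 1           # how often each key was predicted
            all_p[target] += 1     # how often each key actually occurs
            if pred == target:
                true_p[pred] += 1  # correct predictions per key

        report = {}
        for key in all_p:
            precision = true_p[key] / p[key] if p[key] else 0.0
            recall = true_p[key] / all_p[key]
            f_score = (2 * precision * recall / (precision + recall)
                       if precision + recall else None)  # None renders as N/A
            report[key] = (precision, recall, f_score)
        return report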

Data and Cross-Validation

The entire dataset is composed of around 50k blog posts from Engadget. I randomly selected 49k of these as my training set, 50 as my validation set, and around 0.5k as my test set. Training is a bit slow on an Intel i7 desktop, averaging 1.5 s/file depending on the length of the file; as a result, it takes about a day to go through the entire training set.
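
For concreteness, here is a minimal sketch of such a split, assuming one crawled article per text file under ./engadget_data (the file layout and the seed are assumptions, not the project's actual script):

    import random
    from pathlib import Path

    files = sorted(Path("engadget_data").glob("*.txt"))  # one article per file
    random.seed(42)
    random.shuffle(files)

    # roughly the proportions described above: 50 validation, ~500 test, rest train
    validation, test, train = files[:50], files[50:550], files[550:]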

Todo:

All done.

Done:

  • execute demo test after training
  • add final performance metric
  • implement minibatch
  • a generative demo
  • add validation (once an hour or so)
  • add accuracy metric, use precision/recall.
  • change to bi-directional GRU
  • get data
  • Add temperature to generator
  • add self-feeding generator
  • get training to work
  • use optim and Adam

References

1: https://www.aclweb.org/anthology/D/D16/D16-1111.pdf
2: https://phon.ioc.ee/dokuwiki/lib/exe/fetch.php?media=people:tanel:interspeech2015-paper-punct.pdf
3: https://en.wikipedia.org/wiki/precision_and_recall
4: https://en.wikipedia.org/wiki/F1_score

Comments
  • Can't run example training


    Hi there,

    As a first contact with your project I am trying to run it 'as is', but it seems I can't run a proper training with the example notebook.

    All preliminary steps work fine, but when launching the training step, the only output I get is 0it [00:00, ?it/s], repeatedly, and nothing further is happening either on logs or in the directories (nothing is saved).

    By the way, I believe some indentation is missed for the last 2 lines of the block (need to be inside the for loop for batch_ind to be defined).

    I managed to test the pretrained models though.

    Thanks for your implementation anyways, and thanks in advance for your reply! Hope to get the code running properly soon :)

    F

    opened by francoishernandez 5
  • view size not compatible with input tensor size and stride


    Traceback (most recent call last):
      File "train.py", line 71, in <module>
        egdt.forward(input_, target_)
      File "/home/flow/deep-auto-punctuation/model.py", line 90, in forward
        self.next_(input_batch)
      File "/home/flow/deep-auto-punctuation/model.py", line 111, in next_
        self.output, self.hidden = self.model(self.embed(input_text), self.hidden)
      File "/home/flow/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/flow/deep-auto-punctuation/model.py", line 32, in forward
        output = self.decoder(gru_output.view(-1, self.hidden_size * self.bi_mul))
    RuntimeError: invalid argument 2: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Call .contiguous() before .view(). at /opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/TH/generic/THTensor.cpp:237

    Any idea what could be causing this?

    opened by sebastianvermaas 3
  • input string has different length from punctuation list


    Hi, I've tried your code and got an assertion error in the data module because the length of the input string and the punctuation list were not the same. I loaded your latest model without training it again. Do you have any advice?


    opened by laluarif93 0
  • Adding a license


    Hello,

    Would you be willing to add a license to this repository? See: https://opensource.stackexchange.com/questions/1720/what-can-i-assume-if-a-publicly-published-project-has-no-license

    Thank you.

    opened by Tejash241 0
  • Issue with visdom.server


    Hi there

    Brilliant work! I looked everywhere for a solution but didn't manage to solve the issue.

    I keep getting the error message "C:\Users\David\Anaconda3\lib\site-packages\visdom\server.py:39: DeprecationWarning: zmq.eventloop.ioloop is deprecated in pyzmq 17. pyzmq now works with default tornado and asyncio eventloops."

    I tried all the combinations of pip install visdom, conda install visdom, etc., and I didn't succeed in running your work with the command make run.

    Should I use a specific version of visdom or python to use your work?

    Thanks so much for your help!

    Dave

    opened by Best-Trading-Indicator 3
  • Training size too big..Any suggestions?


    Hi! I really am interested in testing and running through your code. Everything is working except for one thing: just before training, it says "too many values to unpack". Any suggestions on how to fix this issue, or any smaller training sets you might know of?

    [screenshot of the error]

    PS: it says python 2 but my kernel is python 3.

    opened by jlvasquezcollado 2