Sonnet finder

Finds snippets in iambic pentameter in English-language text and tries to combine them into a rhyming sonnet.

Usage

This is a Python script that should run without a GPU or any other special hardware.

  1. Install the required packages, e.g. via: pip install -r requirements.txt

  2. Prepare a plain text file, say input.txt, with text you want to make a sonnet out of (sonnet-ize? sonnet-ify?). It can have multiple sentences on the same line, but a sentence should not be split across multiple lines.

    For example, I used pandoc --to=plain --wrap=none to generate a text file from my LaTeX papers. You could also start by grabbing some text files from Project Gutenberg.

  3. Run sonnet finder: python sonnet_finder.py input.txt -o output.tsv

    Using -o will save a list of all extracted candidate phrases, sorted by rhyme pattern, so you can generate new sonnets more quickly (see below) or browse the candidates and cherry-pick lines for a sonnet of your own.

    Either way, the script will output a full example sonnet to STDOUT (provided enough rhyming pairs in iambic pentameter were found).

  4. If you've saved an output.tsv file before, you can quickly generate new sonnets via python sonnet_remix.py output.tsv. Since the stress and pronunciation prediction can be slow on larger files, this is much better than re-running sonnet_finder.py if you want more automatically generated suggestions.
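
Putting the steps together, a typical session looks like this:

    pip install -r requirements.txt
    python sonnet_finder.py input.txt -o output.tsv   # extract candidate lines (slow: runs g2p on the whole file)
    python sonnet_remix.py output.tsv                 # recombine saved candidates into new sonnets (fast)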

Examples

This is a sonnet (with cherry-picked lines) made out of my PhD thesis:

the application of existing tools
describe a mapping to a modern form
applying similar replacement rules
the base ensembles slightly outperform

hungarian, icelandic, portuguese
perform a similar evaluation
contemporary lexemes or morphemes
a single dataset in isolation

historical and modern language stages
the weighted combination of encoder
the german dative ending -e in phrases
predictions fed into the next decoder

in this example from the innsbruck letter
machine translation still remains the better

These stanzas are compiled from a couple of automatically generated suggestions based on the abstracts of all papers published in 2021 in the ACL Anthology:

effective algorithm that enables
improvements on a wide variety
and training with adjudicated labels
anxiety and test anxiety

obtain remarkable improvements on
decoder architecture, which equips
associated with the lexicon
surprising personal relationships

the impact of the anaphoric one
complexity prediction competition
developed for a laboratory run
existing parsers typically condition

examples, while in practice, most unseen
evaluate translation tasks between

Here's the same using Moby Dick:

among the marble senate of the dead
offensive matters consequent upon
a crawling reptile of the land, instead
fifteen, eighteen, and twenty hours on

the lakeman now patrolled the barricade
egyptian tablets, whose antiquity
the waters seemed a golden finger laid
maintains a permanent obliquity

the pequod with the little negro pippin
and with a frightful roll and vomit, he
increased, besides perhaps improving it in
transparent air into the summer sea

the traces of a simple honest heart
the fishery, and not the thousandth part

(The enjambment in the third stanza here is a lucky coincidence; the script currently doesn't do any kind of syntactic analysis or attempt coherence between lines.)

How it works

This script relies on the grapheme-to-phoneme library g2p_en by Park & Kim to convert the English input text to phoneme sequences (i.e., how the text would be pronounced). I chose it because it's a pip-installable Python library that fulfills two important criteria (a short usage example follows this list):

  1. it's not restricted to looking up pronunciations in a dictionary, but can handle arbitrary words through the use of a neural model (although, obviously, this will not always be accurate);

  2. it provides stress information for each vowel (i.e., whether any given vowel should be stressed or unstressed, which is important for determining the poetic meter).
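
For illustration, this is roughly how the script sees its input; the exact phoneme output depends on the g2p_en model, so the comment below is approximate:

    from g2p_en import G2p

    g2p = G2p()
    print(g2p("the traces of a simple honest heart"))
    # Something like: ['DH', 'AH0', ' ', 'T', 'R', 'EY1', 'S', 'AH0', 'Z', ' ', 'AH1', 'V', ...]
    # The digit after each vowel is its stress: 1 = stressed, 0 = unstressed, 2 = secondary stress.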

The script then scans the g2p output for occurrences of iambic pentameter, i.e. a 0101010101(0) pattern, additionally checking if they coincide with word boundaries.
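
In code, the core of that idea might look like this (a minimal sketch, not the script's actual implementation; the real iambic_pentameter regex and the word-boundary handling live in sonnet_finder.py and may differ):

    import re

    # 01 five times, with an optional trailing unstressed syllable (feminine ending).
    # A real implementation also has to decide how to treat secondary stress (digit 2).
    iambic_pentameter = re.compile(r"(?:01){5}0?")

    def stress_pattern(phonemes):
        """Collapse a g2p_en phoneme list into its sequence of vowel stress digits."""
        return "".join(p[-1] for p in phonemes if p[-1].isdigit())

    print(stress_pattern(["DH", "AH0", " ", "B", "EY1", "S"]))  # -> "01"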

For finding snippets that rhyme, I rely mostly on Ghazvininejad et al. (2016), particularly §3 (relaxing the iambic pentameter a bit by allowing words that end in 100) and §5.2 (giving an operational definition of "slant rhyme" that I mostly try to follow).
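
As a rough illustration, a perfect rhyme comes down to comparing everything from the last stressed vowel onwards; the slant-rhyme rules of §5.2 relax this considerably, and this sketch (which is not the script's actual code) only covers the strict case:

    def rhyme_part(phonemes):
        """Return the phonemes from the last stressed vowel (stress digit 1 or 2) to the end."""
        for i in range(len(phonemes) - 1, -1, -1):
            if phonemes[i][-1] in "12":
                return phonemes[i:]
        return phonemes

    def is_perfect_rhyme(a, b):
        # Identical from the last stressed vowel on; slant rhymes additionally
        # tolerate certain consonant mismatches (see §5.2 of the paper).
        return rhyme_part(a) == rhyme_part(b)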

QNA (Questions Nobody Asked)

  • Why does the script sometimes output lines that don't rhyme or don't fit the iambic meter? This script can only be as good as the grapheme-to-phoneme algorithm it uses. It frequently fails on words it doesn't know (for example, it tries to rhyme datasets with Portuguese?!) and also usually fails on abbreviations. Maybe there's a better g2p library that could be used, or the existing g2p_en could be modified to accept a custom dictionary, so you could manually define pronunciations for commonly used words; a sketch of that idea follows after this list.

  • Could this script also generate other types of poems? Sure. You could start by changing the regex iambic_pentameter to something else; maybe a sequence of dactyls? There are some further hardcoded assumptions in the code about iambic pentameter in the function get_stress_and_boundaries() that might have to be modified.

  • Could this script generate poems in languages other than English? This would require a suitable replacement for g2p_en that predicts pronunciations and stress patterns for the desired language, as well as re-writing the code that determines whether two phrases can rhyme; see the comments in the script for details. In particular, the code for English uses ARPABET notation for the pronunciation, which won't be suitable for other languages.

  • Can this script generate completely novel phrases in the style of an input text? This script does not "hallucinate" any text or generate anything that wasn't already there in the input; if you want to do that, take a look at Deep-speare maybe.
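
For the custom-dictionary idea from the first question above, a workaround might look like the following (hypothetical: OVERRIDES and pronounce are not part of the script, and the ARPABET entry below is hand-written):

    from g2p_en import G2p

    # Hand-maintained exceptions, as ARPABET phonemes with stress digits.
    OVERRIDES = {
        "datasets": ["D", "EY1", "T", "AH0", "S", "EH2", "T", "S"],
    }

    g2p = G2p()

    def pronounce(word):
        """Look the word up in the manual dictionary first; fall back to g2p_en."""
        return OVERRIDES.get(word.lower(), g2p(word))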

etc.

Written by Marcel Bollmann, inspired by a tweet, licensed under the MIT License.

I'm not the first one to write a script like this, but it was a fun exercise!
