Overview


Visual Automata

Copyright 2021 Lewi Lie Uberg
Released under the MIT license

Visual Automata is a Python 3 library built as a wrapper for Caleb Evans' Automata library to add more visualization features.

Prerequisites

pip install automata-lib
pip install pandas
pip install graphviz
pip install colormath
pip install jupyterlab

Installing

pip install visual-automata

VisualDFA

Importing

Import needed classes.

from automata.fa.dfa import DFA

from visual_automata.fa.dfa import VisualDFA

Instantiating DFAs

Define a VisualDFA that can accept any string ending with 00 or 11.

dfa = VisualDFA(
    states={"q0", "q1", "q2", "q3", "q4"},
    input_symbols={"0", "1"},
    transitions={
        "q0": {"0": "q3", "1": "q1"},
        "q1": {"0": "q3", "1": "q2"},
        "q2": {"0": "q3", "1": "q2"},
        "q3": {"0": "q4", "1": "q1"},
        "q4": {"0": "q4", "1": "q1"},
    },
    initial_state="q0",
    final_states={"q2", "q4"},
)

Converting

An automata-lib DFA can be converted to a VisualDFA.

Define an automata-lib DFA that can accept any string ending with 00 or 11.

dfa = DFA(
    states={"q0", "q1", "q2", "q3", "q4"},
    input_symbols={"0", "1"},
    transitions={
        "q0": {"0": "q3", "1": "q1"},
        "q1": {"0": "q3", "1": "q2"},
        "q2": {"0": "q3", "1": "q2"},
        "q3": {"0": "q4", "1": "q1"},
        "q4": {"0": "q4", "1": "q1"},
    },
    initial_state="q0",
    final_states={"q2", "q4"},
)

Convert automata-lib DFA to VisualDFA.

dfa = VisualDFA(dfa)

Minimal-DFA

Creates a minimal DFA which accepts the same inputs as the old one. Unreachable states are removed and equivalent states are merged. States are renamed by default.

new_dfa = VisualDFA(
    states={'q0', 'q1', 'q2'},
    input_symbols={'0', '1'},
    transitions={
        'q0': {'0': 'q0', '1': 'q1'},
        'q1': {'0': 'q0', '1': 'q2'},
        'q2': {'0': 'q2', '1': 'q1'}
    },
    initial_state='q0',
    final_states={'q1'}
)
new_dfa.table
      0    1
→q0  q0  *q1
*q1  q0   q2
q2   q2  *q1
new_dfa.show_diagram()

[State diagram for new_dfa]

minimal_dfa = VisualDFA.minify(new_dfa)
minimal_dfa.show_diagram()

[State diagram for the minimized DFA]

minimal_dfa.table
                0        1
→{q0,q2}  {q0,q2}      *q1
*q1       {q0,q2}  {q0,q2}

Transition Table

Outputs the transition table for the given DFA. The arrow (→) marks the initial state, and an asterisk (*) marks an accepting (final) state.

dfa.table
       0    1
→q0   q3   q1
q1    q3  *q2
*q2   q3  *q2
q3   *q4   q1
*q4  *q4   q1

Check input strings

1001 does not end with 00 or 11, and is therefore Rejected

dfa.input_check("1001")
          [Rejected]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1

10011 does end with 11, and is therefore Accepted

dfa.input_check("10011")
          [Accepted]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1
5                 q1             1        *q2
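
As a small usage sketch (not from the original README), several strings can be checked with the same documented method in a loop:

for word in ["1001", "10011", "0100", "0011"]:
    dfa.input_check(word)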

Show Diagram

In IPython, dfa.show_diagram() can be used.
In a Python script, dfa.show_diagram(view=True) can be used to automatically open the rendered graph as a PDF file.

dfa.show_diagram()

[State diagram for dfa]

The show_diagram method also accepts input strings and returns a graph with gradient red arrows for rejected results and gradient green arrows for accepted results. It also displays a table of the transition steps; the step numbers in this table correspond to the [number] labels over each traversed arrow.

Please note that, for visual purposes, additional arrows are added if a transition is traversed more than once.

dfa.show_diagram("1001")
          [Rejected]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1

[State diagram for dfa with the rejected input "1001" traced]

dfa.show_diagram("10011")
          [Accepted]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1
5                 q1             1        *q2

[State diagram for dfa with the accepted input "10011" traced]

Authors

Lewi Lie Uberg

License

This project is licensed under the MIT License - see the LICENSE.md file for details

Acknowledgments

Comments
  • FrozenNFA constructor attempts to call deepcopy on frozendicts

    The VisualNFA constructor attempts to create a deep copy of the passed nfa, specifically the transitions dictionary: https://github.com/lewiuberg/visual-automata/blob/3ea0cdc4de9d3919250919b70fbc036d75120a85/visual_automata/fa/nfa.py#L469

    The deepcopy method is monkeypatched onto dict via curse: https://github.com/lewiuberg/visual-automata/blob/3ea0cdc4de9d3919250919b70fbc036d75120a85/visual_automata/fa/nfa.py#L32

    However, automata-lib 7.0.1 returns a frozendict from the frozendict package instead, so the method call fails. It is not clear whether copying the frozendict is necessary at all; deepcopy returns the object as-is.
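
    One possible direction (a sketch, not part of the report): use the standard-library copy.deepcopy, which accepts both plain dicts and frozendicts, instead of relying on the monkeypatched dict.deepcopy attribute.

    import copy

    def copy_transitions(all_transitions):
        # copy.deepcopy handles both the plain dicts used by older automata-lib
        # releases and the frozendicts returned by automata-lib 7.x, so no
        # monkeypatched dict.deepcopy attribute is needed.
        return copy.deepcopy(all_transitions)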

    MRE

    Using most recent versions:

    • automata-lib 7.0.1
    • visual_automata 1.1.1
    from automata.fa.nfa import NFA
    from visual_automata.fa.nfa import VisualNFA
    
    nfa = NFA(states={"q0"}, input_symbols={"i0"}, transitions={"q0": {"i0": {"q0"}}}, initial_state="q0",
              final_states={"q0"})
    VisualNFA(nfa).show_diagram(view=True)
    

    Expected Behavior

    The automaton is shown.

    Actual Behavior

    Traceback (most recent call last):
      File "/path/to/scratch_1.py", line 6, in <module>
        VisualNFA(nfa).show_diagram(view=True)
      File "/path/to/site-packages/visual_automata/fa/nfa.py", line 619, in show_diagram
        all_transitions_pairs = self._transitions_pairs(self.nfa.transitions)
      File "/path/to/site-packages/visual_automata/fa/nfa.py", line 469, in _transitions_pairs
        all_transitions = all_transitions.deepcopy()
    AttributeError: 'frozendict.frozendict' object has no attribute 'deepcopy'
    
    opened by no-preserve-root 3
  • VisualDFA constructor implicitly checks wrapped automaton cardinality

    The VisualDFA constructor checks the dfa parameter using https://github.com/lewiuberg/visual-automata/blob/3ea0cdc4de9d3919250919b70fbc036d75120a85/visual_automata/fa/dfa.py#L34

    This checks whether dfa is truthy. Since the DFA class defines a __len__ method (and no __bool__), it is truthy iff len(dfa) != 0. Unfortunately, computing the length checks the DFA's cardinality, i.e., the size of its input language. For infinite-language DFAs, an exception is then raised. As a result, infinite DFAs cannot be visualized.

    This could be fixed by testing whether dfa is None. VisualNFA is not affected since NFA does not define a __len__ method at the moment, but it would fail if a similar method were added to NFA.
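
    A minimal sketch of the suggested check (the helper name is hypothetical; the dfa parameter is from the report):

    def wraps_existing_dfa(dfa) -> bool:
        # Test identity against None instead of truthiness, so DFA.__len__
        # (which computes the language's cardinality and can raise
        # InfiniteLanguageException for infinite languages) is never invoked.
        return dfa is not None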

    MRE

    Using most recent versions:

    • automata-lib 7.0.1
    • visual_automata 1.1.1
    from automata.fa.dfa import DFA
    from visual_automata.fa.dfa import VisualDFA
    
    dfa = DFA(states={"q0"}, input_symbols={"i0"}, transitions={"q0": {"i0": "q0"}}, initial_state="q0",
              final_states={"q0"})
    VisualDFA(dfa).show_diagram(view=True)
    

    Expected Behavior

    The automaton is shown.

    Actual Behavior

    Traceback (most recent call last):
      File "/path/to/scratch_1.py", line 6, in <module>
        VisualDFA(dfa).show_diagram(view=True)
      File "/path/to/site-packages/visual_automata/fa/dfa.py", line 34, in __init__
        if dfa:
      File "/path/to/site-packages/automata/fa/dfa.py", line 160, in __len__
        return self.cardinality()
      File "/path/to/site-packages/automata/fa/dfa.py", line 792, in cardinality
        raise exceptions.InfiniteLanguageException("The language represented by the DFA is infinite.")
    automata.base.exceptions.InfiniteLanguageException: The language represented by the DFA is infinite.
    

    Workaround

    Manually copying the automaton works:

    VisualDFA(states=dfa.states, input_symbols=dfa.input_symbols, transitions=dfa.transitions,
              initial_state=dfa.initial_state, final_states=dfa.final_states).show_diagram(view=True)
    
    opened by no-preserve-root 1