MMDA - multimodal document analysis

Overview


This is a work in progress.

Setup

conda create -n mmda python=3.8
conda activate mmda
pip install -r requirements.txt

Parsers

  • SymbolScraper - Apache 2.0

    • Quoted from their README: From the main directory, issue make. This will run the Maven build system, download dependencies, etc., compile source files and generate .jar files in ./target. Finally, a bash script bin/sscraper is generated, so that the program can be easily used in different directories.

Library walkthrough

1. Creating a Document for the first time

In this example, we use the SymbolScraperParser. Each parser implements its own .parse().

import os
from mmda.parsers.symbol_scraper_parser import SymbolScraperParser
from mmda.types.document import Document

ssparser = SymbolScraperParser(sscraper_bin_path='...')
doc: Document = ssparser.parse(infile='...pdf', outdir='...', outfname='...json')

Because we provided outdir and outfname, the document is also serialized for you:

assert os.path.exists(os.path.join(outdir, outfname))

2. Loading a serialized Document

Each parser implements its own .load().

doc: Document = ssparser.load(infile=os.path.join(outdir, outfname))

3. Iterating through a Document

The minimum requirement for a Document is its .text field, which is just a string.

But this library really becomes useful when you have multiple different ways of segmenting the same .text. For example:

for page in doc.pages:
    print(f'\n=== PAGE: {page.id} ===\n\n')
    for row in page.rows:
        print(row.text)

shows two nice aspects of this library:

  • Document provides iterables for different segmentations of text. Options include pages, tokens, rows, sents, blocks. Not every Parser will provide every segmentation, though. For example, SymbolScraperParser only provides pages, tokens, rows.

  • Each one of these segments (precisely, DocSpan objects) is aware of (and can access) other segment types. For example, you can call page.rows to get all Rows that intersect a particular Page. Or you can call sent.tokens to get all Tokens that intersect a particular Sentence. Or you can call sent.blocks to get the Block(s) that intersect a particular Sentence. These indexes are built dynamically when the Document is created and each time a new DocSpan type is loaded. In the extreme, one can do:

for page in doc.pages:
    for block in page.blocks:
        for sent in block.sents:
            for row in sent.rows:
                for token in sent.tokens:
                    pass

4. Loading new DocSpan type

Not all Documents will have all segmentations available at creation time. You may need to load new segment types into an existing Document.

It's strongly recommended to create the full Document using a Parser.load(), but if you need to, you can build it up step by step using the DocSpan class and the Document.load() method:

from mmda.types.span import Span
from mmda.types.document import Document, DocSpan, Token, Page, Row, Sent, Block

doc = Document(text='I live in New York. I read the New York Times.')
page_jsons = [{'start': 0, 'end': 46, 'id': 0}]
sent_jsons = [{'start': 0, 'end': 19, 'id': 0}, {'start': 20, 'end': 46, 'id': 1}]

pages = [
    DocSpan.from_span(span=Span.from_json(span_json=page_json), 
                      doc=doc, 
                      span_type=Page)
    for page_json in page_jsons
]
sents = [
    DocSpan.from_span(span=Span.from_json(span_json=sent_json), 
                      doc=doc, 
                      span_type=Sent)
    for sent_json in sent_jsons
]

doc.load(sents=sents, pages=pages)

assert doc.sents
assert doc.pages
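
Once loaded, the new types participate in the same cross-indexing described in section 3. A small usage sketch based on the behavior described above:

for page in doc.pages:
    for sent in page.sents:
        print(sent.text)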

5. Changing the Document

We currently don't provide tools for mutating the data in a Document once it's been created, aside from loading new data. Mutate at your own risk.

One note: if you're editing something (e.g. replacing some DocSpan in tokens), always call:

doc._build_span_type_to_spans()
doc._build_span_type_to_index()

to keep the indices up-to-date with your modified DocSpan.
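
For example, here is a hypothetical edit (reusing the imports from section 4; not an officially supported workflow): swap out one token's DocSpan, then rebuild the indices so cross-segment lookups stay consistent.

# hypothetical: replace the first token, then rebuild indices (mutate at your own risk)
new_token = DocSpan.from_span(
    span=Span.from_json(span_json={'start': 0, 'end': 1, 'id': 0}),
    doc=doc,
    span_type=Token,
)
doc.tokens[0] = new_token
doc._build_span_type_to_spans()
doc._build_span_type_to_index()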

Comments
  • VILA predictor service

    VILA predictor service

    https://github.com/allenai/scholar/issues/29184

    REST service for VILA predictors

    Any code that I didn't comment on in this PR is s2age-maker boilerplate.

    opened by rodneykinney 12
  • cleanup Annotation class: remove `uuid`, pull `id` out of `metadata`, remove `dataclasses`, add `getter/setter` for `text` and `type`, make `Metadata()` take args

    cleanup Annotation class: remove `uuid`, pull `id` out of `metadata`, remove `dataclasses`, add `getter/setter` for `text` and `type`, make `Metadata()` take args

    @soldni I'm trying to migrate off dataclass, but the tests are failing at:

    ERROR tests/test_eval/test_metrics.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_internal_ai2/test_api.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_parsers/test_grobid_header_parser.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_parsers/test_override.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_parsers/test_pdf_plumber_parser.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_predictors/test_bibentry_predictor.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_predictors/test_dictionary_word_predictor.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_predictors/test_span_group_classification_predictor.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_predictors/test_vila_predictors.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_types/test_document.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_types/test_indexers.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_types/test_json_conversion.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_types/test_metadata.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_types/test_span_group.py - TypeError: add_deprecated_field only works on dataclasses
    

    not sure how best to handle

    opened by kyleclo 7
  • Add attributes to API data classes

    Add attributes to API data classes

    This PR adds metadata to API data classes.

    API data classes and mmda types differ in a few significant aspects:

    • id and type (and text for SpanGroup) are stored in metadata for mmda types; in the APIs, they are part of the top-level attributes.
    • metadata can store arbitrary content in the mmda types; in the data API, all attributes that are not explicitly declared are dropped.
      • we expect applications that require specific fields to declare custom Metadata and SpanGroup/BoxGroup classes that inherit from their parent class. For an example, see tests/test_internal_ai2/test_api.py
      • metadata entries are mapped to an attributes field to match how data is stored in the Annotation Store
    opened by soldni 4
  • Egork/merge spans

    Egork/merge spans

    Adding a class to the utils for merging spans, with optional parameters x and y. x and y are distances added to the boundaries of the boxes to decide whether they overlap.

    Here is an example of tokens represented by a list of spans; the task is to merge them into a single span and box.

    Result of merging tokens with x=0.04387334, y=0.01421097, the average token size in the document.

    Another example with same x=0.04387334, y=0.01421097
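
    A minimal sketch of the idea, using hypothetical helpers rather than the exact class added in this PR: pad each span's box by (x, y) and merge spans whose padded boxes overlap.

    def boxes_overlap(a, b, x=0.0, y=0.0):
        # a, b are (x0, y0, x1, y1) boxes; x and y pad the boundaries before testing overlap
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return not (ax1 + x < bx0 - x or bx1 + x < ax0 - x or
                    ay1 + y < by0 - y or by1 + y < ay0 - y)

    def merge_boxes(a, b):
        # smallest box covering both inputs
        return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

    def merge_overlapping(boxes, x=0.0, y=0.0):
        # single greedy pass; the actual utility may merge iteratively
        merged = []
        for box in boxes:
            for i, m in enumerate(merged):
                if boxes_overlap(box, m, x=x, y=y):
                    merged[i] = merge_boxes(box, m)
                    break
            else:
                merged.append(box)
        return merged

    print(merge_overlapping([(0, 0, 1, 1), (1.02, 0, 2, 1), (5, 5, 6, 6)], x=0.04, y=0.01))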


    opened by comorado 4
  • Bib Entry Parser/Predictor

    Bib Entry Parser/Predictor

    https://github.com/allenai/scholar/issues/32461

    Pretty standard model and interface implementation. You can see the result on http://bibentry-predictor.v0.dev.models.s2.allenai.org/ or

    curl -X 'POST' \
      'http://bibentry-predictor.v0.dev.models.s2.allenai.org/invocations' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
      "instances": [
        {
          "bib_entry": "[4] Wei Zhuo, Qianyi Zhan, Yuan Liu, Zhenping Xie, and Jing Lu. Context attention heterogeneous network embed- ding. Computational Intelligence and Neuroscience , 2019. doi: 10.1155/2019/8106073."
        }
      ]
    }'
    

    Tests: integration test passed and dev deployment works

    TODO: release as version 0.0.10 after merge

    opened by stefanc-ai2 4
  • Dockerized pipeline

    Dockerized pipeline

    Pattern for wrapping services in a lighter-weight containers.

    • Replace full sub-projects in the services directory with a Dockerfile plus a xxx_api.py file for each service.
    • Add a docker-compose file that build the services plus a python container for running a REPL or scripts
    • Add pipeline.py sample end-to-end pipeline script using the services
    • Add run-pipeline.sh file to run pipeline

    @rauthur

    opened by rodneykinney 4
  • MMDA predictor evaluation

    MMDA predictor evaluation

    • [x] Grobid implemented as a Parser
    • [x] Script to obtain S2-VLUE (check w/ shannons on location)
    • [x] Script to run S2-VLUE through an mmda.Predictor & obtain evaluation metrics
    • [x] Definition/implementation of end-to-end evaluation metrics.
    S2-VLUE evaluation looks like this:
    [('VILA', 'title'), ('a', 'title'), ('new', 'title')...] 
    and the evaluation associated with it in the VILA paper is token-level F1.
    
    But if we want to compare against GROBID or other systems that use different parsers (i.e. different ._symbols and .tokens), what we want for evaluation is
    {'title': 'VILA a new...', ...}
    
    Our evaluation metric is based on string match / edit-distance metrics.  
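
    As a rough illustration of that kind of metric (not the actual evaluation code), a predicted title could be scored against the gold title with a normalized edit-similarity:

    import difflib

    def string_similarity(pred: str, gold: str) -> float:
        # ratio in [0, 1]; 1.0 means an exact match
        return difflib.SequenceMatcher(None, pred.lower(), gold.lower()).ratio()

    print(string_similarity("VILA a new", "VILA: a new benchmark"))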
    

    Focus for now on title and abstract. Other S2-VLUE classes may not even exist in Grobid or the other tools we want to compare against. If we have time, other content types we'd want are:

    • Section names
    • Author names
    • Bibliographies (split out; optionally, also-parsed)
    • Body text
    • Captions
    • Footnotes
    • Tables/Figures
    opened by kyleclo 4
  • Speed up vila pre-processing

    Speed up vila pre-processing

    From earlier testing, I remember that convert_document_page_to_pdf_dict takes a significant fraction of the total prediction time for vila. Here's a simple change that does all the work in a single iteration over tokens instead of multiple list comprehensions. I tested this on some production PDFs and saw a 3x speed-up.
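
    A hedged sketch of the general pattern (hypothetical token fields, not the actual convert_document_page_to_pdf_dict code): gather all per-token fields in one pass instead of one list comprehension per field.

    from collections import namedtuple

    Token = namedtuple("Token", ["text", "box", "id"])
    tokens = [Token("VILA", (0, 0, 10, 10), 0), Token("predictor", (12, 0, 40, 10), 1)]

    # Before: several passes over the same tokens
    # words = [t.text for t in tokens]
    # boxes = [t.box for t in tokens]
    # ids   = [t.id for t in tokens]

    # After: a single pass collecting everything at once
    words, boxes, ids = [], [], []
    for t in tokens:
        words.append(t.text)
        boxes.append(t.box)
        ids.append(t.id)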

    @cmwilhelm @yoganandc

    https://github.com/allenai/scholar/issues/32695

    opened by rodneykinney 3
  • Kylel/2022 09/span group utils

    Kylel/2022 09/span group utils

    Minor PR: added tests for a pretty important piece of functionality that was already being used: how to combine Spans that are next to each other into a single big Span. The key thing that was undocumented was that Boxes for the underlying Spans actually disappear after this merging, which the tests now capture.

    I don't think this is intended behavior we want to support in the future, but for now, this is how the utility is being used.
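
    A rough sketch of the behavior the tests capture (a hypothetical helper, not the library's exact utility): adjacent Spans collapse into one Span covering the combined range, and any per-Span Box information is simply not carried over.

    def merge_adjacent(spans):
        # spans: list of (start, end) tuples sorted by start; returns merged ranges
        merged = []
        for start, end in spans:
            if merged and start <= merged[-1][1]:  # touching or overlapping the previous span
                merged[-1] = (merged[-1][0], max(merged[-1][1], end))
            else:
                merged.append((start, end))
        return merged

    assert merge_adjacent([(0, 5), (5, 9), (12, 15)]) == [(0, 9), (12, 15)]
    # Nothing here preserves per-span Boxes; after merging they are gone.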

    opened by kyleclo 2
  • `Document._annotate_box_group` returns empty SpanGroups

    `Document._annotate_box_group` returns empty SpanGroups

    Very bizarre bug I've encountered when trying to annotate a document with blocks from layout parser. To reproduce, run the following code:

    from mmda.parsers.pdfplumber_parser import PDFPlumberParser
    from mmda.rasterizers.rasterizer import PDF2ImageRasterizer
    from mmda.predictors.lp_predictors import LayoutParserPredictor
    
    import torch
    import warnings
    from cached_path import cached_path        # must install via `pip install cached_path`
    
    
    pdfplumber_parser = PDFPlumberParser()
    rasterizer = PDF2ImageRasterizer()
    layout_predictor = LayoutParserPredictor.from_pretrained(
        "lp://efficientdet/PubLayNet"
    )
    
    path = str(cached_path('https://arxiv.org/pdf/2110.08536.pdf'))
    doc = pdfplumber_parser.parse(path)
    images = rasterizer.rasterize(input_pdf_path=path, dpi=72)
    doc.annotate_images(images)
    
    with torch.no_grad(), warnings.catch_warnings():
        layout_regions = layout_predictor.predict(doc)
        doc.annotate(blocks=layout_regions)
    
    # these asserts should fail
    assert doc.blocks[0].spans == []
    assert doc.blocks[0].tokens == []
    

    I've done a bit of poking around and it seems to stem from the following snippet of code in mmda/types/document.py:

    derived_span_groups = sorted(
        derived_span_groups, key=lambda span_group: span_group.start
    )
    

    In particular, it seems like the spans attribute for each SpanGroup gets emptied after sorting.

    No clue what would be causing this, but perhaps there's an explanation?

    opened by soldni 2
  • VILA models crashing when bounding boxes are not int

    VILA models crashing when bounding boxes are not int

    Because of changes in #69, bounding boxes are now float instead of int, which VILA does not like:

      File "/Users/lucas/miniforge3/envs/pdod/lib/python3.10/site-packages/mmda/predictors/hf_predictors/vila_predictor.py", line 161, in predict
        model_outputs = self.model(**self.model_input_collator(model_inputs))
      File "/Users/lucas/miniforge3/envs/pdod/lib/python3.10/site-packages/mmda/predictors/hf_predictors/vila_predictor.py", line 178, in model_input_collator
        return {
      File "/Users/lucas/miniforge3/envs/pdod/lib/python3.10/site-packages/mmda/predictors/hf_predictors/vila_predictor.py", line 179, in <dictcomp>
        key: torch.tensor(val, dtype=torch.int64, device=self.device)
    TypeError: 'float' object cannot be interpreted as an integer
    

    This PR adds an explicit cast operation to get around this issue during pre-processing.
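
    A minimal sketch of the kind of cast involved (the actual change lives in the VILA pre-processing code):

    # hypothetical example: coerce float bbox coordinates to int before they are passed
    # to torch.tensor(..., dtype=torch.int64), which rejects Python floats
    bbox = [10.0, 12.5, 100.2, 40.9]
    bbox = [int(round(coord)) for coord in bbox]
    assert all(isinstance(c, int) for c in bbox)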

    opened by soldni 2
  • Bib predictor index error bug fix

    Bib predictor index error bug fix

    Attempt at a fix for part 1 of https://github.com/allenai/scholar/issues/34858. I think we can work around the index error that keeps popping up this way.

    tt verify integration test passes

    next steps:

    • [ ] tt push
    • [ ] update timo-services config for bib-predictor to use new code which will trigger new deployment
    opened by geli-gel 4
  • Add fontinfo to tokens without requiring word split

    Add fontinfo to tokens without requiring word split

    This PR appends font information (font name and size) as metadata to 'tokens' on a Document without requiring tokens to be split on that information (i.e., "best" effort if a token contains many font names or sizes). The code subclasses the WordExtractor provided by PDFPlumber.

    Currently, in a default configuration of PDFPlumberParser, tokens are already extracted with font name and size information (although it is discarded and only used for splitting). This could be added to the metadata as-is; however, the method used is the extra_attrs argument of PDFPlumber's extract_words, which forces token splitting if font name and size do not match. I believe this is a bad default and, further, is not required (users can override this argument). The approach here guarantees this metadata will be captured.

    Re: "best effort" in case of many name/size options for a token: I have maintained the logic of just taking the font name and size from the first character in the token. Another approach provides the min, max and average font sizes (or others) as well as a set of font names. This could be provided in the future or by adapting the append_attrs argument. For current use cases (section nesting prediction) this approach is sufficient.

    opened by rauthur 0
  • Incomplete sentences in README

    Incomplete sentences in README

    I was going through the readme and noticed a couple sentences that start, but don't end. Since I'm new to the project, I don't know how to finish the sentences myself.

    opened by dmh43 0
  • cleanup JSON conversion for all data types

    cleanup JSON conversion for all data types

    Noticed JSON serialization had inconsistent behavior across various data types, especially in cases where certain fields were empty or None.

    This PR adds a set of comprehensive tests in tests/test_types/test_json_conversion.py that document these behaviors. The PR also resolves several inconsistencies. For example, Metadata that's attached to a SpanGroup or BoxGroup no longer gets accidentally serialized as an empty dictionary.

    opened by kyleclo 0
  • adding relations

    adding relations

    This PR extends the library's functionality substantially by adding a new Annotation type called Relation. A Relation is a link between two annotations (e.g. a Citation linked to its Bib Entry). The input Annotations are called key and value.

    A few things needed to change to support Relations:

    Annotation Names

    Relations store references to Annotation objects. But we didn't want Relation.to_json() to also .to_json() those objects. We only want to store minimal identifiers of the key and value. Something short like bib_entry-5 or sentence-13. We call these short strings names.

    To do this, we added an optional attribute field: str to the Annotation class, which stores the field name. It's automatically populated when you run Document.annotate(new_field=list_of_annotations); each of those input annotations will have the new field name stored under .field.

    We also added name, which returns a name for a particular Annotation object that is unique at the document level. AnnotationName is a minimal class that basically stores .field and .id.

    In short, now after you annotate a Document with annotations, you can do stuff like:

    doc.tokens[15].name   ==   AnnotationName(field='tokens', id=15)
    str(annotation_name)  ==   'tokens-15'
    AnnotationName.from_str('tokens-15')  ==  AnnotationName(field='tokens', id=15)
    

    Lookups based on names

    To support reconstructing a Relation object given the names of its key and value, we need the ability to look up the involved Annotations. We introduce a new method to enable this:

    annotation_name = AnnotationName.from_str('paragraphs-99')
    a = document.locate_annotation(annotation_name)  # returns the specific Annotation object
    assert a.id == 99
    assert a.field == 'paragraphs'
    

    to and from JSON

    Finally, we need some way of serializing to JSON and reconstructing from JSON. For serialization, now that we have names, the JSON is quite minimal:

    {'key': <name_of_key>, 'value': <name_of_value>, ...other stuff that all Annotation objects have,  like Metadata...}
    

    Reconstructing a Relation from JSON is trickier because it's meaningless without a Document object. The Document must also store the specific Annotations so that we can correctly perform the lookup based on these names.

    The API for this is similar, but you must also pass in the Document object:

    relation = Relation.from_json(my_relation_dict, my_document_containing_necessary_fields)
    
    opened by kyleclo 2
Releases: 0.2.7