Generic framework for historical document processing

Overview

dhSegment

dhSegment is a tool for Historical Document Processing. Its generic approach makes it possible to segment regions and extract content from different types of documents. See some examples here.

The complete description of the system can be found in the corresponding paper.

It was created by Benoit Seguin and Sofia Ares Oliveira at DHLAB, EPFL.

Installation and usage

The installation procedure and examples of usage can be found in the documentation (see section below).

Demo

Try the demo to (optionally) train dhSegment and apply it to page extraction using the demo.py script.
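
As a quick orientation, below is a minimal sketch of applying an already trained model, following the pattern used in demo.py. The model directory and image path are placeholders, and the LoadedModel interface and its 'filename' predict mode are assumptions based on the inference module; refer to the documentation for the exact API.

import tensorflow as tf
from dh_segment.inference import LoadedModel

model_dir = 'demo/model/'            # placeholder: directory of an exported model
input_image = 'demo/pages/test.jpg'  # placeholder: image to segment

with tf.Session():
    # 'filename' mode lets the model read and resize the image itself
    m = LoadedModel(model_dir, predict_mode='filename')
    output = m.predict(input_image)

probs = output['probs'][0]    # per-pixel class probabilities
labels = output['labels'][0]  # per-pixel argmax labels

The 'labels' plane mentioned in the comments below comes from this same output dictionary.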

Documentation

The documentation is available on readthedocs.

If you are using this code for your research, you can cite the corresponding paper as:

@inproceedings{oliveiraseguinkaplan2018dhsegment,
  title={dhSegment: A generic deep-learning approach for document segmentation},
  author={Ares Oliveira, Sofia and Seguin, Benoit and Kaplan, Frederic},
  booktitle={Frontiers in Handwriting Recognition (ICFHR), 2018 16th International Conference on},
  pages={7--12},
  year={2018},
  organization={IEEE}
}
Comments
  • How to use multilabel prediction type?

    When I change prediction_type from 'CLASSIFICATION' to 'MULTILABEL', I get the following assertion:

    result.shape[1] > 3, "The number of columns should be greater in multi-label framework"
    

    So how do I use multi-label? (A sketch of a possible classes.txt layout follows below.)

    Thanks!

    opened by duchengyao 32
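
    A minimal sketch of what the assertion seems to check, assuming (not confirmed here, check the documentation) that a multi-label classes file lists the RGB colour of each label combination followed by one attribution column per class, so the parsed table has more than 3 columns. The colours and attributions below are purely hypothetical.

    import io
    import numpy as np

    # Hypothetical multi-label classes.txt: columns 1-3 are the RGB colour of
    # the combined label, the remaining columns are per-class attributions.
    classes_txt = """0 0 0 0 0
    255 0 0 1 0
    0 255 0 0 1
    255 255 0 1 1
    """

    result = np.loadtxt(io.StringIO(classes_txt))
    # With the extra attribution columns, the multi-label check passes:
    assert result.shape[1] > 3, "The number of columns should be greater in multi-label framework"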
  • Can't be installed under Windows

    dhSegment is AWESOME and EXACTLY what my wife and I need for our post-cancer #PayItForward Bonus Round activity doing grassroots #CitizenScience #digitalhumanities research in support of eResearch and machine-learning in the domain of digitization of serial publications, primarily modern commercial magazines. We are working on the development of the #MAGAZINEgts ground-truth storage format providing standards-based (#cidocCRM/FRBRoo/PRESSoo) integrated complex document structure and content depiction models.

    When a tweet about dhSegment surfaced through my feed, I could barely contain myself... we have detailed, multi-valued metadata -- based on a metamodel of fine-grained use of PRESSoo's Issuing Rules -- that describes the location, bounding box, size, shape, number of colors, products featured, etc. for 7,157 advertisements appearing in the 48 issues of Softalk magazine (https://archive.org/details/softalkapple). It will be trivial for me to generate the annotated label images for all these ads, as we have already programmatically extracted the ad sub-images from the full pages once we used our "Ad Ferret" to discover and curate the specification for every ad.

    Once we have a dhSegment instance trained on the Softalk ads, there are over 1.5M pages just within the "collection of collections" of computer magazines at the Internet Archive, and many millions more pages of content in magazines of all types over considerable time periods of their serial publication. The #MAGAZINEgts format, together with brilliant technical achievements like dhSegment, can open new levels of scholarship and machine access to digital collections. We believe dhSegment will be a valuable component for our research platform/framework.

    With great excitement I chased down and have installed and tested the prerequisite CUDA and cuDNN frameworks/platforms under Windows. I have these features now working at the 9.1 version. (This alone was tricky, but I got it working.)

    Unfortunately, the current implementation of the incredibly important dhSegment environment cannot be installed under Windows 10. After the stock Anaconda environment yml file died somewhat dramatically, I then took that file and attempted to search for and install each package individually. (NOTE: I am not a Python expert, so what I report here is subject to refinement by someone who knows better...) Here is what is NOT available under Windows:

    # Python packages for dh_segment not available under Windows
    - dbus=1.12.2
    - fontconfig
    - glib=2.53.6
    - gmp=6.1.2
    - graphite2=1.3.10
    - gst-plugins-base
    - gstreamer=1.12.4
    - harfbuzz=1.7.4
    - jasper=1.900.1
    - libedit=3.1
    - libffi=3.2.1
    - libgcc-ng=7.2.0
    - libgfortran-ng=7.2.0
    - libopus=1.2.1
    - libstdcxx-ng=7.2.0
    - libvpx=1.6.1
    - ncurses=6.0
    - ptyprocess=0.5.2
    - readline=7.0
    - pip:
      - tensorflow-gpu==1.4.1 (I did find and installed 1.8.0 instead) 
    

    Anything not on this list made it into my Windows-based Anaconda environment, the yml for which I have included here as a file attachment.

    win10_dh_segment.yml.txt

    I am so disappointed to not be able to install and use dhSegment under Windows. While a docker image would likely be possible to create, I am skeptical that it would work at the level needed for interfacing with the NVIDIA hardware and its CUDA/cuDNN frameworks, etc. Alternatively, perhaps a cloud-based dev platform would work for us (that is affordable as we are independent and unfunded #CitizenScientists). Your workaround/alternative suggestions are welcome.

    At any rate, sorry for the overly long initial issue posting. But I wanted to explain my and my wife's great interest in this important technology as well as provide what I hope is useful feedback with regard to its potential use under Windows. Looking forward, I am very interested in evolving a collaborative relationship with you good folks of DHLAB.

    ITMT, I am going to generate the labeled training images. :-)

    Happy-Healthy Vibes, FactMiner Jim

    P.S. Here is our #DATeCH2017 poster that will further explain the focus of our research. salmonsbabitsky_factminerssoftalk_poster

    P.P.S. And here is a screenshot showing a typical metadata "spec" for an ad. The simple integer value for the AdLocation is used in concert with an embedded DSL in the fine-grained Issuing Rules of the Advertising Model. This DSL provides a resolution-independent means to describe and compute the upper-left corner and bounding box of an ad. For example, the four locations of a 1/4-page-sized ad on a page with a 2-column page grid are numbered 1-4, left-to-right, top-to-bottom. The proportions of these page segments are based on simple geometric proportional computations. magazinegts_documentstructure_advertisements

    And finally, the evolving #MAGAZINEgts for the Softalk magazine collection at the Internet Archive is available here: https://archive.org/download/softalkapple/softalkapple_publication.xml

    opened by Jim-Salmons 9
  • detecting multiple instances of same object

    This page shows multiple ornament extractions on the same page, but my model never detects more than one instance of a similar object.

    I am using the same demo.py as in the master branch.

    Can someone help me? (A generic sketch that keeps all detected boxes follows below.)

    opened by ankur7721 4
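
    A generic sketch for keeping every detection (plain OpenCV/NumPy, not demo.py itself): instead of keeping a single region, threshold the probability map of the class of interest and turn each sufficiently large connected component into its own box.

    import cv2
    import numpy as np

    def find_all_boxes(probs, threshold=0.5, min_area=500):
        """probs: 2-D probability map of one class, values in [0, 1]."""
        binary = (probs > threshold).astype(np.uint8)
        # [-2] keeps this compatible with both OpenCV 3.x and 4.x return values
        contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]  # one (x, y, w, h) per instance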
  • HOW TO USE IT ON TF SERVING BATCH PREDICTION

    I have retrained the model on my own dataset, but when I try to get predictions from TF Serving via a gRPC API call, I am not able to pass the images as a batch: it raises a dimensions error. When I pass a single image, I do get predictions. Can someone help me with batch prediction when this model is served? (A hedged client sketch follows below.)

    opened by anish9 4
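
    A hedged client-side sketch with several assumptions: the model name, signature name, and input tensor name are placeholders that must match the exported model, and all images in the batch must share the same height and width so they can be stacked into a single tensor (mismatched sizes are one common cause of dimension errors).

    import grpc
    import numpy as np
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    images = [np.zeros((600, 400, 3), np.float32)] * 4  # placeholder batch:
    batch = np.stack(images)                            # identical H and W required

    channel = grpc.insecure_channel('localhost:8500')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'dhsegment'                  # assumption
    request.model_spec.signature_name = 'serving_default'  # assumption
    request.inputs['images'].CopyFrom(                     # input name is an assumption
        tf.make_tensor_proto(batch, dtype=tf.float32, shape=batch.shape))

    response = stub.Predict(request, 30.0)  # 30 s timeout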
  • Original Training image with XML labels to extract data from documents

    Hi,

    I'm working on a page layout analysis and information extraction tool, and I found that dhSegment might work well for this task. However, I don't know whether dhSegment can be trained from XML-based annotations (TextRegion, SeparatorRegion, TableRegion, ImageRegion, points defining the bounds of each region...) in addition to the RGB-style region definitions. I see on the main page of the project that there is a Layout Analysis example in the Use Cases section; that is the case that most resembles the one I want to implement. I also want to extract text from the detected regions.

    How can I do that? Can I still use dhSegment, or do I have to implement my own detector? (A sketch of converting PAGE-XML regions into RGB masks follows below.)

    Thanks.

    Regards.

    opened by Omua 4
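
    One possible way to bridge the two formats, sketched generically below (this is not dhSegment's own converter): parse the PAGE-XML region polygons and render them into RGB label images matching a classes.txt. The namespace, the region types, and the colour mapping are assumptions to adapt. Note that dhSegment predicts regions; the actual text recognition still needs a separate OCR/HTR engine.

    import cv2
    import numpy as np
    import xml.etree.ElementTree as ET

    NS = {'p': 'http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15'}  # assumed namespace
    COLORS = {'TextRegion': (255, 0, 0), 'SeparatorRegion': (0, 255, 0)}           # must match classes.txt

    def page_xml_to_mask(xml_path):
        root = ET.parse(xml_path).getroot()
        page = root.find('p:Page', NS)
        h, w = int(page.get('imageHeight')), int(page.get('imageWidth'))
        mask = np.zeros((h, w, 3), np.uint8)
        for region_type, color in COLORS.items():
            for region in page.findall('p:' + region_type, NS):
                points = region.find('p:Coords', NS).get('points')
                poly = np.array([[int(v) for v in p.split(',')] for p in points.split()],
                                dtype=np.int32)
                cv2.fillPoly(mask, [poly], color)
        return mask  # save with cv2.imwrite(..., mask[:, :, ::-1]) if BGR order is needed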
  • PredictionType.CLASSIFICATION and extracting rectangles

    I am attempting CLASSIFICATION now, not MULTILABEL (issue https://github.com/dhlab-epfl/dhSegment/issues/29 was helpful in mentioning that mutually exclusive areas mean classification, not multilabel; this is clear in retrospect ;^).

    Now I need to extract rectangles and I have hit a big gap in dhSegment. The demo.py code shows how to generate the rectangle corresponding to a skewed page, but there is only one class. I modified demo.py to identify rectangles for each label. When there are multiple classes, there can be spurious, overlapping rectangles.

    How can I:

    1. Identify the highest confidence class instances
    2. That are not overlapping

    The end result I want is one or more jpegs associated with a particular class label plus the coordinates within the input image.

    Perhaps the labels plane in the prediction result offers some help here? demo.py does not use the labels plane. (A sketch that uses the labels plane follows below.)

    opened by tralfamadude 3
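
    A sketch of one way to use the labels plane for this (generic OpenCV/NumPy; it assumes the prediction exposes a per-pixel argmax map 'labels' and the class probabilities 'probs'): since the argmax assigns each pixel to exactly one class, boxes derived from it cannot overlap across classes, and the mean probability inside each box gives a confidence for ranking or filtering.

    import cv2
    import numpy as np

    def boxes_per_class(labels, probs, min_area=1000):
        """labels: HxW argmax map; probs: HxWxC probability maps."""
        results = []
        for cls in np.unique(labels):
            if cls == 0:  # assume class 0 is background
                continue
            binary = (labels == cls).astype(np.uint8)
            contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                        cv2.CHAIN_APPROX_SIMPLE)[-2]
            for c in contours:
                x, y, w, h = cv2.boundingRect(c)
                if w * h < min_area:
                    continue
                confidence = float(probs[y:y + h, x:x + w, cls].mean())
                results.append((cls, confidence, (x, y, w, h)))
        return sorted(results, key=lambda r: -r[1])

    # Crops for a given class can then be saved as JPEGs together with their
    # (x, y, w, h) coordinates, e.g. cv2.imwrite('crop.jpg', image[y:y+h, x:x+w])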
  • Feature/table cells

    The PAGE-XML functionality has been extended so that TableCell elements can be created. An error that occurred when trying to transform a list to points was also fixed.

    opened by CrazyCrud 3
  • Need a short guide of layout detection and line detection

    Hello, I have a large collection of scans of written text in table forms with a complex layout structure and only vertical borders printed. My plan is to segment the table rows cell by cell, detect lines inside each cell, and then attempt recognition. I went through the dhSegment demo; it works, but I ran into problems with the individual operations. Could you please provide examples of the use cases described in the overview at https://dhlab-epfl.github.io/dhSegment/ ? I'm ready to label a training dataset from my collection but cannot get started. Is there any notebook or video guide? One more question concerns the READ-BAD dataset that was suggested in a couple of issue discussions: I see the article PDF on arxiv.org but didn't find a link to download the image collection. What did I miss?

    opened by longwall 3
  • A more efficient neural architecture

    @solivr @SeguinBe Thank you for your hard work,

    Could you merge MobileNet v2 into master and add a demo showing how to use it? Thank you.

    Waiting for your reply

    opened by mrocr 3
  • Convert generated VIA binary masks (black and white) into RGB expected format

    First, thanks for your work !

    I tried to create masks from a VIA project file (doc here). It works, but how can I convert the generated black-and-white masks into RGB masks (matching classes.txt)? (A small conversion sketch follows below.)

    I may have missed something but I did not find the code to do it.

    Thanks for your help !

    opened by loic001 3
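
    A small conversion sketch (NumPy/Pillow, not code from the repository): paint the white pixels of the binary mask with the colour declared for that class in classes.txt. The colour below is a placeholder that must match your own classes.txt.

    import numpy as np
    from PIL import Image

    CLASS_COLOR = (255, 0, 0)  # placeholder: the colour listed for this class in classes.txt
    BACKGROUND = (0, 0, 0)

    def binary_to_rgb(mask_path, out_path):
        mask = np.array(Image.open(mask_path).convert('L'))
        rgb = np.zeros(mask.shape + (3,), dtype=np.uint8)
        rgb[...] = BACKGROUND
        rgb[mask > 127] = CLASS_COLOR  # white pixels become the class colour
        Image.fromarray(rgb).save(out_path)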
  • Model loading/training error

    When executing the command python train.py with demo/demo_config.json, I get the error below. FYI, I've followed the installation instructions with conda. (A common GPU-memory workaround is sketched below.)

    InternalError (see above for traceback): cuDNN launch failure : input shape([1,3,1095,538]) filter shape([7,7,3,64]) [[{{node resnet_v1_50/conv1/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](gradients/resnet_v1_50/conv1/Conv2D_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer, resnet_v1_50/conv1/weights/read)]]

    opened by lvaleriu 3
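
    Not a confirmed fix for this particular trace, but a common workaround for cuDNN launch failures is to keep TensorFlow from allocating all GPU memory up front, e.g. with a TF 1.x session config such as this sketch.

    import tensorflow as tf

    session_config = tf.ConfigProto()
    session_config.gpu_options.allow_growth = True  # allocate GPU memory on demand

    # For estimator-based training, the config can be passed via RunConfig:
    run_config = tf.estimator.RunConfig(session_config=session_config)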
  • Suggest to loosen the dependency on sacred

    Hi, your project dhSegment (commit id: cca94e94aec52baa9350eaaa60c006d7fde103b7) requires "sacred==0.7.4" as a dependency. After analyzing the source code, we found that the following version of sacred is also suitable, i.e., sacred 0.7.3, since all functions that you use from the package, directly (1 API: sacred.experiment.Experiment.__init__) or indirectly (propagating to 19 of sacred's internal APIs and 17 outside APIs), have not been changed in these versions, so your usage is not affected.

    Therefore, we believe it is quite safe to loosen your dependency on sacred from "sacred==0.7.4" to "sacred>=0.7.3,<=0.7.4". This would improve the applicability of dhSegment and reduce the possibility of further dependency conflicts with other projects.

    May I open a pull request to loosen the dependency on sacred?

    By the way, could you please tell us whether such an automatic tool for dependency analysis could be helpful for making dependency maintenance easier during your development?

    opened by Agnes-U 0
  • Performance issue in the definition of model_fn, dh_segment/estimator_fn.py(P1)

    Hello, I found a performance issue in the definition of model_fn in dh_segment/estimator_fn.py: the expression tf.cast(tf.shape(network_output)[1:3], ...) will be calculated repeatedly during program execution, resulting in reduced efficiency. I think it should be created before the loop. (A small illustration of the suggested hoisting follows below.)

    Looking forward to your reply. Btw, I would be glad to create a PR to fix it if you are too busy.

    opened by DLPerf 0
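
    A schematic illustration of the suggested change (not the actual estimator_fn.py code): build the size tensor once and reuse it in the loop, instead of adding the same tf.shape/tf.cast ops to the graph on every iteration.

    import tensorflow as tf  # TF 1.x graph mode, as used by the project

    network_output = tf.placeholder(tf.float32, [None, None, None, 8])
    n_branches = 4  # stand-in for the original loop bound

    # Hoisted: the size tensor is created once and reused below,
    # rather than being re-created inside each loop iteration.
    output_size = tf.cast(tf.shape(network_output)[1:3], tf.float32)

    scaled_sizes = [output_size / (2 ** i) for i in range(n_branches)]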
  • Tensorflow 2.4 (request for permission to upgrade this repo to this)

    Hi!

    I have locally upgraded this repo to TensorFlow 2.4.1. I thought it might be helpful if I shared this code with you. If you would like, I can create a pull request with this update for the repo; I would just need permission to do so. Let me know!

    Toby

    opened by tobyDickinson 3
  • Reproducing baseline detection results

    Hello,

    I'm trying to reproduce the baseline detection results in your paper. What was the training/validation split used? Also, is it the case that demo/demo_cbad_config.json is the same configuration used to achieve your results? Thank you!

    opened by jason-vega 0
  • Speed of Inference on GeForce GTX 1080

    My testing, based on a variation of demo.py for classification with 7 labels/classes, shows choppy performance on a GPU. Excluding Python post-processing and ignoring the first two inferences, I see processing durations like 0.09, 0.089, 0.56, 0.56, 0.079, 0.39, 0.09 ...; the average over 19 images is 0.19 sec per image.

    I'm surprised by the variance.

    At 5 images/sec it is workable, but it could be better. Would TensorFlow Serving help by getting Python out of the loop? I need to process 1M images per day.

    (The GPU is GeForce GTX 1080 and is using 10.8GB of 11GB RAM, only one TF session is used for multiple inferences.)

    opened by tralfamadude 1
  • Multilabel limitation should be documented

    Only 7 labels are supported, and this is not documented. Since considerable effort can go into preparing training data, discovering this limitation only when running train.py is wasteful.

    opened by tralfamadude 1