Official PyTorch implementation for "Mixed supervision for surface-defect detection: from weakly to fully supervised learning"

Overview

Mixed supervision for surface-defect detection: from weakly to fully supervised learning [Computers in Industry 2021]

Official PyTorch implementation for "Mixed supervision for surface-defect detection: from weakly to fully supervised learning", published in the journal Computers in Industry, 2021.

The same code is also an official implementation of the method used in "End-to-end training of a two-stage neural network for defect detection", presented at the International Conference on Pattern Recognition (ICPR) 2020.

Citation

Please cite our Computers in Industry 2021 paper when using this code:

@article{Bozic2021COMIND,
  author = {Bo{\v{z}}i{\v{c}}, Jakob and Tabernik, Domen and 
  Sko{\v{c}}aj, Danijel},
  journal = {Computers in Industry},
  title = {{Mixed supervision for surface-defect detection: from weakly to fully supervised learning}},
  year = {2021}
}

How to run:

Requirements

Code has been tested to work on:

  • Python 3.8
  • PyTorch 1.6, 1.8
  • CUDA 10.0, 10.1
  • additional packages as listed in requirements.txt (see the setup sketch below)
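
A minimal setup sketch, assuming pip and a working CUDA-enabled PyTorch installation (adjust package versions to match the list above):

# install the additional packages listed in requirements.txt
pip install -r requirements.txt
# check that PyTorch can see a GPU
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"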

Datasets

You will need to download the datasets yourself. For DAGM and the Severstal Steel Defect Dataset you will also need a Kaggle account.

  • DAGM available here.
  • KolektorSDD available here.
  • KolektorSDD2 available here.
  • Severstal Steel Defect Dataset available here.

For details about the data structure, refer to the README.md in the datasets folder.

Cross-validation splits, train/test splits and weakly/fully labeled splits for all datasets are located in the splits directory of this repository, alongside instructions on how to use them.

Using on other data

Refer to the README.md in the datasets folder for instructions on how to use the method on other datasets.

Demo - fully supervised learning

To run fully supervised learning and evaluation on all four datasets, run:

./DEMO.sh
# or by specifying multiple GPU ids 
./DEMO.sh 0 1 2

Results will be written to the ./results folder.

Replicating paper results

To replicate the results published in the paper run:

./EXPERIMENTS_COMIND.sh
# or by specifying multiple GPU ids 
./EXPERIMENTS_COMIND.sh 0 1 2

To replicate the results from the ICPR 2020 paper:

@misc{Bozic2020ICPR,
    title={End-to-end training of a two-stage neural network for defect detection},
    author={Jakob Božič and Domen Tabernik and Danijel Skočaj},
    year={2020},
    eprint={2007.07676},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

run:

./EXPERIMENTS_ICPR.sh
# or by specifying multiple GPU ids 
./EXPERIMENTS_ICPR.sh 0 1 2

Results will be written to the ./results-comind and ./results-icpr folders.

Usage of training/evaluation code

The following Python files are used to train/evaluate the model:

  • train_net.py Main entry for training and evaluation
  • models.py Model file for network
  • data/dataset_catalog.py Contains currently supported datasets

To train and evaluate a network you can also use EXPERIMENTS_ROOT.sh, which contains several helper functions that make training and evaluation easier; see that file for more details.

Running code

The simplest way to train and evaluate a network is to use EXPERIMENTS_ROOT.sh; you can see examples of its use in EXPERIMENTS_ICPR.sh and EXPERIMENTS_COMIND.sh.

If you wish to do it another way, you can run train_net.py directly and pass the parameters as command-line arguments. Below is an example of how to train a model on a single fold of the KSDD dataset.

python -u train_net.py  \
    --GPU=0 \
    --DATASET=KSDD \
    --RUN_NAME=RUN_NAME \
    --DATASET_PATH=/path/to/dataset \
    --RESULTS_PATH=/path/to/save/results \
    --SAVE_IMAGES=True \
    --DILATE=7 \
    --EPOCHS=50 \
    --LEARNING_RATE=1.0 \
    --DELTA_CLS_LOSS=0.01 \
    --BATCH_SIZE=1 \
    --WEIGHTED_SEG_LOSS=True \
    --WEIGHTED_SEG_LOSS_P=2 \
    --WEIGHTED_SEG_LOSS_MAX=1 \
    --DYN_BALANCED_LOSS=True \
    --GRADIENT_ADJUSTMENT=True \
    --FREQUENCY_SAMPLING=True \
    --TRAIN_NUM=33 \
    --NUM_SEGMENTED=33 \
    --FOLD=0

Some of the datasets do not require you to specify --TRAIN_NUM or --FOLD. After training, each model is also evaluated.

For KSDD you need to combine the evaluation results from all three folds; you can do this with join_folds_results.py:

python -u join_folds_results.py \
    --RUN_NAME=SAMPLE_RUN \
    --RESULTS_PATH=/path/to/save/results \
    --DATASET=KSDD 
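
Putting the two steps together, a minimal sketch of the full three-fold KSDD workflow, assuming the same hyper-parameters and placeholder paths as the example above and RUN_NAME set to SAMPLE_RUN, could look like this:

# train and evaluate one model per fold (0, 1 and 2)
for FOLD in 0 1 2; do
    python -u train_net.py \
        --GPU=0 \
        --DATASET=KSDD \
        --RUN_NAME=SAMPLE_RUN \
        --DATASET_PATH=/path/to/dataset \
        --RESULTS_PATH=/path/to/save/results \
        --SAVE_IMAGES=True \
        --DILATE=7 \
        --EPOCHS=50 \
        --LEARNING_RATE=1.0 \
        --DELTA_CLS_LOSS=0.01 \
        --BATCH_SIZE=1 \
        --WEIGHTED_SEG_LOSS=True \
        --WEIGHTED_SEG_LOSS_P=2 \
        --WEIGHTED_SEG_LOSS_MAX=1 \
        --DYN_BALANCED_LOSS=True \
        --GRADIENT_ADJUSTMENT=True \
        --FREQUENCY_SAMPLING=True \
        --TRAIN_NUM=33 \
        --NUM_SEGMENTED=33 \
        --FOLD=$FOLD
done

# combine the per-fold evaluation results
python -u join_folds_results.py \
    --RUN_NAME=SAMPLE_RUN \
    --RESULTS_PATH=/path/to/save/results \
    --DATASET=KSDD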

You can use read_results.py to generate a table of results for all runs on a selected dataset.
Note: the model is sensitive to random initialization and data shuffling during training, which will lead to different performance across runs unless --REPRODUCIBLE_RUN is set.

Owner
ViCoS Lab