COCO Style Dataset Generator GUI

Overview


This is a simple matplotlib-based GUI widget in Python that facilitates quick, crowd-sourced generation of annotation masks and bounding boxes through an interactive user interface. Annotation can be in terms of polygon points covering all parts of an object (see instructions in README) or it can simply be a bounding box, for which you click and drag the mouse button. Optionally, one could choose to use a pretrained Mask RCNN model to come up with initial segmentations. This shifts the workload from painstakingly annotating all the objects in every image to correcting the wrong predictions made by the system, which may be simpler once a reasonably accurate model has been trained.

Note: This repo only contains code to annotate every object using a single polygon figure. Support for multi-polygon objects and iscrowd=True annotations isn't available yet. Feel free to extend the repo as you wish. Also, the code uses xyxy bounding boxes while COCO uses xywh; keep this in mind if you intend to plug a custom COCO-style dataset into other models.
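
Since this distinction is easy to trip over, here is a minimal Python sketch (not part of this repo) converting the tool's xyxy boxes to COCO's xywh format:

    # Convert an [x1, y1, x2, y2] box (this tool's format) to
    # COCO's [x, y, width, height] format.
    def xyxy_to_xywh(box):
        x1, y1, x2, y2 = box
        return [x1, y1, x2 - x1, y2 - y1]

    print(xyxy_to_xywh([10, 20, 110, 220]))  # -> [10, 20, 100, 200]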

REQUIREMENTS:

Python 3.5+ is required to run the Mask RCNN code. If only the GUI tool is used, either Python 2.7 or Python 3.5+ can be used.

NOTE: For Python 2.7, OpenCV needs to be installed from source and configured to be available in the environment running the code.
Before installing, please upgrade setuptools using: pip install --upgrade setuptools
For Windows users, please install Visual C++ 14 or higher if necessary using this link: http://go.microsoft.com/fwlink/?LinkId=691126&fixForIE=.exe

RUN THE SEGMENTOR GUI:

Clone the repo.

git clone https://github.com/hanskrupakar/COCO-Style-Dataset-Generator-GUI.git

Installing Dependencies:

Before running the code, install the prerequisite Python packages using pip.

If you wish to use Mask RCNN to pre-label images based on a trained model, set the environment variable MASK_RCNN="y" during installation; otherwise, omit it and perform a plain install.

Without Mask RCNN:

cd COCO-Style-Dataset-Generator-GUI/
python setup.py install

With Mask RCNN:

cd COCO-Style-Dataset-Generator-GUI/
MASK_RCNN="y" python3 setup.py install

Running the instance segmentation GUI without Mask RCNN pretrained predictions:

In a separate text file, list the target labels/classes line by line; these are displayed alongside the dataset as the available class labels. For example, look at classes/products.txt.
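
For illustration, a classes file contains exactly one label per line. The labels below are hypothetical examples, not the actual contents of classes/products.txt:

    bottle
    cereal_box
    soda_can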

python3 -m coco_dataset_generator.gui.segment -i background/ -c classes/products.txt

python3 -m coco_dataset_generator.gui.segment_bbox_only -i background/ -c classes/products.txt

Running the instance segmentation GUI augmented by initial Mask RCNN pretrained model predictions:

To run the demo with this particular model, download the pretrained weights from HERE!!!, then extract them into the repository.

python3 -m coco_dataset_generator.gui.segment -i background/ -c classes/products.txt \
                                              -w <WEIGHTS_FILE> [--config <CONFIG_FILE>]

python3 -m coco_dataset_generator.gui.segment_bbox_only -i background/ -c classes/products.txt \
                                                        -w <WEIGHTS_FILE> [--config <CONFIG_FILE>]

The configuration file for Mask RCNN becomes relevant when you change the configuration parameters that define the network. To use the repository seamlessly with multiple Mask RCNN models for different types of datasets, you can create one config file per project and use them as needed. The base repository has been configured to work well with the demo model provided, so any change to the parameters should be followed by generating a corresponding config file.
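
As a rough illustration, such a config file is a JSON dump of the model's configuration parameters. The keys below follow the naming convention of the Matterport Mask R-CNN implementation and are assumptions for illustration, not the exact schema this repo emits:

    {
        "NAME": "products",
        "GPU_COUNT": 1,
        "IMAGES_PER_GPU": 1,
        "NUM_CLASSES": 12,
        "DETECTION_MIN_CONFIDENCE": 0.7
    }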

HINT: Use get_json_config.py inside Mask RCNN to generate a config file for specific Mask RCNN parameters. You can either clone Mask_RCNN and run pip install -e Mask_RCNN/ to replace the mask_rcnn installed by this repo, giving you direct access to get_json_config.py, or locate where pip installed mask_rcnn and run it from the source there.

USAGE: segment.py [-h] -i IMAGE_DIR -c CLASS_FILE [-w WEIGHTS_PATH] [-x CONFIG_PATH]

USAGE: segment_bbox_only.py [-h] -i IMAGE_FILE -c CLASSES_FILE [-j JSON_FILE] [--save_csv] [-w WEIGHTS_PATH] [-x CONFIG_PATH]

Optional Arguments

Shorthand          Flag Name                        Description
-h                 --help                           Show this help message and exit
-i IMAGE_DIR       --image_dir IMAGE_DIR            Path to the image directory
-c CLASS_FILE      --class_file CLASS_FILE          Path to the file of object class labels
-w WEIGHTS_PATH    --weights_path WEIGHTS_PATH      Path to the Mask RCNN checkpoint save file
-j JSON_FILE       --json_file JSON_FILE            Path of the JSON file to append the dataset to
                   --save_csv                       Save the dataset as a CSV file
-x CONFIG_FILE     --config_file CONFIG_FILE        Path of the JSON file for the training config;
                                                    use the get_json_config script from Mask RCNN

POLYGON SEGMENTATION GUI CONTROLS:


In this demo, all the green patches over the objects are the rough masks generated by a pretrained Mask RCNN network.

Key-bindings/ Buttons

EDIT MODE (when 'a' is pressed and a polygon is being edited)

  'a'       toggle vertex markers on and off.
            When vertex markers are on, you can move or delete them

  'd'       delete the vertex under the pointer

  'i'       insert a vertex at the pointer, near the polygon boundary

Left click  Click any point on the polygon boundary and drag
            to alter the shape of the polygon

REGULAR MODE

Scroll Up       Zoom into image

Scroll Down     Zoom out of image

Left Click      Create a point for a polygon mask around an object

Right Click     Complete the polygon currently formed by connecting all selected points

Left Click Drag Create a bounding box rectangle from point 1 to point 2 (works only
                when there are no polygon points on screen for particular object)

  'a'           Press while hovering over an overlaid polygon (from Mask RCNN
                or previous annotations) to select it for editing

  'r'           Press while hovering over an overlaid polygon (from Mask RCNN
                or previous annotations) to completely remove it

BRING PREVIOUS ANNOTATIONS  Bring back the annotations from the previous image,
                            useful when consecutive images share similar annotations.

SUBMIT                      To be clicked after a right click completes the polygon.
                            Finalizes the current segmentation mask and the class
                            label picked. After this, the polygon cannot be edited.

NEXT                        Save all annotations created for the current file and
                            move on to the next image.

PREV                        Go to the previous image to re-annotate it. This deletes
                            the annotations created for that image so that fresh
                            annotations can be written.

RESET                       If the polygon being drawn doesn't cover the object
                            properly, reset lets you start the current polygon from
                            scratch. This deletes all the points on the image.

The green annotation masks from the network can be edited by pressing the keyboard key 'a' while the mouse pointer is over a particular mask. Once you press 'a', the points making up that polygon appear and you can edit it using the key bindings specified above. When you're done editing the polygon, press 'a' again to finalize the edits. At this point, it becomes possible to submit that particular annotation and move on to the next one.

Once the GUI tool has been used and the relevant txt files have been created for all annotated images, use create_json_file.py to create the COCO-style JSON file.

python -m coco_dataset_generator.utils.create_json_file -i background/ -c classes/products.txt \
                                                        -o output.json -t jpg

USAGE: create_json_file.py [-h] -i IMAGE_DIR -o FILE_PATH -c CLASS_FILE -t TYPE

Optional Arguments

Shorthand          Flag Name                        Description
-h                 --help                           Show this help message and exit
-i IMAGE_DIR       --image_dir IMAGE_DIR            Path to the image directory
-o FILE_PATH       --file_path FILE_PATH            Path of the output file
-c CLASS_FILE      --class_file CLASS_FILE          Path of the file with output classes
-t TYPE            --type TYPE                      Type of the image files (jpg, png, etc.)
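
For reference, a COCO-style JSON file has three main top-level lists. The snippet below is a minimal sketch of the standard COCO layout with illustrative values (note that bbox is xywh in standard COCO); the exact fields this tool writes may differ slightly:

    {
        "images": [
            {"id": 1, "file_name": "img_0001.jpg", "width": 640, "height": 480}
        ],
        "annotations": [
            {"id": 1, "image_id": 1, "category_id": 2,
             "segmentation": [[310.0, 200.0, 350.0, 200.0, 350.0, 240.0, 310.0, 240.0]],
             "bbox": [310.0, 200.0, 40.0, 40.0],
             "area": 1600.0, "iscrowd": 0}
        ],
        "categories": [
            {"id": 2, "name": "cereal_box", "supercategory": "object"}
        ]
    }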

RECTANGULAR BOUNDING BOX GUI CONTROLS:

The same GUI is designed slightly differently for rectangular bounding-box annotation, with annotation speed in mind, so most actions are bound to keyboard keys. This interface is particularly well suited to tracking objects across video frames by dragging a box of similar size from frame to frame. Since the SAVE button saves results for multiple frames together, the JSON file is created directly instead of a txt file per image, so there is no need to use create_json_file.py.

Key-bindings/ Buttons

EDIT MODE (when 'a' is pressed and a rectangle is being edited)

  'a'       toggle vertex markers on and off.  When vertex markers are on,
            you can move or delete them

  'i'       insert the rectangle into the list of final objects to save

Left click  Click any point on the rectangle boundary and drag
            to alter the shape of the rectangle

REGULAR MODE

Scroll Up       Zoom into image

Scroll Down     Zoom out of image

Left Click Drag Create a bounding box rectangle from point 1 to point 2.

  'a'           Press while hovering over an overlaid rectangle (from Mask RCNN
                or previous annotations) to select it for editing

  'r'           Press while hovering over an overlaid rectangle (from Mask RCNN
                or previous annotations) to completely remove it

  'n'           Press key to move on to next image after completing all
                rectangles in current image

  SAVE          Save all annotated objects so far

LIST OF FUNCTIONALITIES:

    FILE                            FUNCTIONALITY

cut_objects.py                  Cuts objects out using the bounding-box annotations in the
                                dataset.json file and creates an occlusion-based augmented
                                image dataset.

create_json_file.py             Takes a directory of annotated images (use segment.py to annotate
                                into text files) and returns a COCO-style JSON file.

extract_frames.py               Takes a directory of videos and extracts all the frames of all
                                videos into a folder labeled adequately by the video name.

pascal_to_coco.py               Takes a PASCAL-style dataset directory with JPEGImages/ and
                                Annotations/ folders and uses the bounding box as masks to
                                create a COCO-style JSON file.

segment.py                      Read the instructions above.

segment_bbox_only.py            Same functionality but optimized for easier annotation of
                                bbox-only datasets.

test_*.py                       Unit tests.

visualize_dataset.py            Visualize the annotations created using the tool.

visualize_json_file.py          Visualize the dataset JSON file annotations on the entire dataset.

compute_dataset_statistics.py   Find the distribution of object counts per class in the
                                dataset (see the sketch after this list).

combine_json_files.py           Combine different JSON files together into a single dataset file.

delete_images.py                Delete specified images from the JSON dataset.

NOTE: Please use python <FILE>.py -h for details on how to use each of the above files.
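
As an illustration of the kind of processing these utilities do, the standalone sketch below counts object instances per category in a COCO-style JSON file, similar in spirit to compute_dataset_statistics.py. It is not the repo's implementation, and the file path is an example:

    import json
    from collections import Counter

    # Load a COCO-style dataset file (path is an example).
    with open("output.json") as f:
        dataset = json.load(f)

    # Map category ids to names, then count annotations per category.
    id_to_name = {c["id"]: c["name"] for c in dataset["categories"]}
    counts = Counter(id_to_name[a["category_id"]] for a in dataset["annotations"])

    for name, count in counts.most_common():
        print(f"{name}: {count}")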
