Honours project on creating a depth estimation map from two stereo images of featureless regions

Overview

image-processing

This module generates depth maps for shape-blocked-out images

Install

If working with Anaconda, run the following from the root directory:

conda env create --file environment.yml
conda activate image-processing

Otherwise, if Python 3 is installed, pip can be used to install the required packages. From the root directory, run

pip install -r requirements.txt

Files

The core functional files are collection.py, image.py, shape.py, edge.py, and segment.py. Each contains a class of the same name. They follow this order and encapsulate one another: a Collection creates three Image objects for the left, center, and right images; each Image creates a number of Shape objects; Shapes create Edge objects; and Edges create Segment objects. helper.py contains assisting functions used by these classes.

This design splits up the information and processes needed to generate a depth map and groups them logically to ease comprehension. Each file should be commented well enough to follow what its part of the process is doing.

The only file intended to be accessed directly to retrieve depth maps is collection.py, as it orchestrates the entire process.
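
For orientation, the sketch below shows roughly how the pipeline might be driven through Collection. The constructor arguments and method names (create_depth_map, save, and so on) are assumptions made for illustration, not the actual interface; see collection.py for the real one.

    # Hypothetical usage sketch -- the constructor signature and method
    # names are assumed for illustration, not taken from collection.py.
    from collection import Collection

    coll = Collection("occluded_road", index=0)   # one image set from assets/
    depth_map = coll.create_depth_map(base=0)     # 0 = base the visual on the left image
    coll.save(depth_map)                          # written under saves/generated/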

Usage

Both main.py and auto_gen.py are designed to access collection and have it create depth maps. They require the initial images to be stored in a directory within assets/, each with three further subdirectories: cameraLeft/, cameraCenter/, and cameraRight/. Results are saved to saves/, with the generated images stored in saves/generated/. The .img files are object files generated during this process so that less work is needed the next time the same process is executed.

Main

main.py is for individual depth map generation. Up to five arguments can be passed to specify details of the execution.

  1. The directory name desired from within assets/
  2. The numerical index (starting at 0) of the specific image desired within the innermost subdirectories
  3. The number indicating which image the depth map visual should be based on (0 for left, 1 for center, 2 for right)
  4. Whether the resulting depth image should be saved
  5. Whether the resulting depth image should be displayed

While it can take up to these five arguments, it can also be run with none. In that case the directory within assets/ is randomly selected, as are the index of the image set and the image used to generate the depth map visual, and the results are both saved and displayed. Partial arguments are also fine, so long as the order is maintained.
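
As a rough picture of how these optional positional arguments and their fallbacks could be handled, here is a minimal sketch; it is not the actual contents of main.py, and the random-selection details are assumed.

    # Sketch only: how main.py's optional positional arguments might default.
    # The real script may differ; the random selection shown here is assumed.
    import os
    import random
    import sys

    args = sys.argv[1:]
    asset_dir  = args[0] if len(args) > 0 else random.choice(os.listdir("assets"))
    image_idx  = int(args[1]) if len(args) > 1 else None  # None: pick a random set later
    base_image = int(args[2]) if len(args) > 2 else random.randint(0, 2)
    save_it    = args[3].lower() != "false" if len(args) > 3 else True
    show_it    = args[4].lower() != "false" if len(args) > 4 else True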

Example: display the depth map based on the left image, without saving it, for occluded_road's first image set

python main.py occluded_road 0 0 False True

note: the last argument, True, is redundant in this case
 
Example: any road_no_occlusion image set, with any image used to create the depth map visual (will automatically save and display)

python main.py road_no_occlusion

 

Example: anything (will automatically save and display)

python main.py

 

The value of running it on an image whose depth image has already been generated is that the result is quickly pulled up in the viewer, where, unlike in the static image, the individual pixel values under the mouse cursor are shown in the top-right corner.

 

Auto_gen

Alternatively, auto_gen.py is intended for the automated creation of all depth map images.

python auto_gen.py

Simply executing it will determine the depth map image for all image sets and save them all. The terminal output is saved to a .txt file stored in saves/logs. It does not display the results, as that would greatly impede the process of creating all of them.

Alternatively, it can take two arguments.

  1. Specifies a directory within assets/ to use rather than executing for all of them, similar to the first argument for main.py
  2. Specifies the image to be used as the basis for the depth map visual, similar to the third argument for main.py (0-2 for left, center, and right)

Example: All depth images

python auto_gen.py

 
Example: All Shape_based_stereoPairs depth images using the right image

python auto_gen.py Shape_based_stereoPairs 2

 

For both scripts, if a depth map already exists, it will not be regenerated, even if the image expected to be used as its basis is different. To force regeneration, remove both the .jpg and the .img file and re-run.
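
Roughly, that check might look like the sketch below; the exact file locations and naming are assumptions based on the directory layout described above, not the scripts' actual logic.

    # Sketch of the "already generated" short-circuit; paths are assumed.
    import os

    def needs_generation(name: str) -> bool:
        """True only if neither the depth .jpg nor the cached .img exists."""
        jpg_path = os.path.join("saves", "generated", name + ".jpg")
        img_path = os.path.join("saves", name + ".img")
        return not (os.path.exists(jpg_path) or os.path.exists(img_path))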

How it works

 

Initialization

Upon creation of a Collection instance, it first instantiates the left image's Image instance. The shape colours are determined and then each shape is instantiated. The bounding box of each shape is determined, as well as its left and right edges and their segments.

Collection uses the colours determined by the left Image to speed up the other two images' instantiations.

After everything has been created, the segments of each edge, of each shape, in each image must be assigned. This first requires determining the displacement of the edges, which is then used to determine which shape does and does not own which segment.

Generally, at this stage all but a few stragglers are assigned. The remaining ones occur when a shape has few edges and the only edge it could own is shared with the ground or sky shape, making it difficult to tell which shape owns it. Using additional information about the shapes, ownership of these is assigned. Finally, it checks whether any shapes are the ground or sky, as their depths are not calculated.

At this stage, the image objects are saved.
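
In outline, the initialization could be pictured as below. This is only a paraphrase of the steps above; the constructor parameters, the shape_colours attribute, and the use of pickle for the cached object files are assumptions, not the project's actual code.

    # Rough outline of Collection initialization; names and the use of
    # pickle for the cached object files are assumed.
    import pickle
    from image import Image  # the project's Image class (image.py)

    def build_images(left_path, center_path, right_path):
        left = Image(left_path)                  # determines the shape colours itself
        colours = left.shape_colours             # assumed attribute name
        center = Image(center_path, known_colours=colours)
        right = Image(right_path, known_colours=colours)

        images = [left, center, right]
        # ...assign segment ownership across shapes and edges here...

        for img, path in zip(images, (left_path, center_path, right_path)):
            with open(path + ".img", "wb") as f:  # cache for later runs
                pickle.dump(img, f)
        return images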

Depth calculation

Then, using this information about the edges of a shape, its depth can be calculated more accurately. Only the edges it owns are used to determine its depth: if it owns only its right side, only the right edge is used; if both are owned, the midpoint is used.

However, if the shape is determined to have a varying depth, then its depth can alternatively be calculated using the change of slope between the images.
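
These depth values come from the displacement of the owned edges between the views. For reference, the classical pinhole-stereo relation has depth inversely proportional to that horizontal displacement (disparity); whether the project applies this exact formula or a calibrated variant is not shown here, and the parameters below are assumed.

    # Classical stereo relation: depth falls off with disparity.
    # focal_length_px and baseline_m are assumed inputs, not project values.
    def depth_from_disparity(disparity_px: float,
                             focal_length_px: float,
                             baseline_m: float) -> float:
        """Depth (in the baseline's units) from horizontal edge displacement."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_length_px * baseline_m / disparity_px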

Finally, once all depth values are found, a modified version of the original image is created in which the shape colours are replaced with their determined depth values, the sky with pure black, and the ground with pure white. This image is then optionally saved and displayed. Which image is re-coloured for the depth map depends on either a given argument or random selection.
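
That recolouring step can be sketched roughly as follows, using numpy; the shape-to-colour and shape-to-depth mappings are assumed data structures for illustration, and the sky/ground handling follows the description above.

    # Sketch of the final recolouring; the mapping structures are assumed.
    import numpy as np

    def recolour_depth(image, shape_colours, shape_depths, sky_ids, ground_ids):
        """image: HxWx3 array; shape_colours/shape_depths: dicts keyed by shape id."""
        depth_map = np.zeros(image.shape[:2], dtype=np.uint8)
        for shape_id, colour in shape_colours.items():
            mask = np.all(image == colour, axis=-1)
            if shape_id in sky_ids:
                depth_map[mask] = 0        # sky -> pure black
            elif shape_id in ground_ids:
                depth_map[mask] = 255      # ground -> pure white
            else:
                depth_map[mask] = shape_depths[shape_id]
        return depth_map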
