Semantic Meshes

A framework for annotating 3D meshes using the predictions of a 2D semantic segmentation model.

License: MIT

Paper

If you find this framework useful in your research, please consider citing: [arxiv]

@misc{fervers2021improving,
      title={Improving Semantic Image Segmentation via Label Fusion in Semantically Textured Meshes},
      author={Florian Fervers and Timo Breuer and Gregor Stachowiak and Sebastian Bullinger and Christoph Bodensteiner and Michael Arens},
      year={2021},
      eprint={2111.11103},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Workflow

  1. Reconstruct a mesh of your scene from a set of images (e.g. using Colmap).
  2. Send all undistorted images through your segmentation model (e.g. from tfcv or image-segmentation-keras) to produce 2D semantic annotation images (see the sketch after this list).
  3. Project all 2D annotations onto the 3D mesh and fuse conflicting predictions.
  4. Render the annotated mesh from the original camera poses to produce new, consistent 2D annotation images, or save it as a colorized ply file.
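
For step 2, the fusion expects dense per-pixel class probability distributions rather than hard labels (see the Usage example below). As a rough sketch, such a prediction could be produced like this; the model object and its output layout are placeholder assumptions, not part of this repository:

import numpy as np

def predict_probabilities(model, image):
    # Hypothetical segmentation model that returns logits of shape (height, width, classes)
    logits = np.asarray(model(image[np.newaxis]))[0]
    # Softmax over the class axis yields a probability distribution per pixel
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)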

Example output for a traffic scene with annotations produced by a model that was trained on Cityscapes:

(two rendered views of the annotated mesh)

Usage

We provide a Python interface that enables easy integration with numpy and machine learning frameworks like TensorFlow. A full example script is provided in colorize_cityscapes_mesh.py, which annotates a mesh using a segmentation model that was pretrained on Cityscapes. The model is downloaded automatically and the prediction is performed on the fly.

import semantic_meshes
import imageio

...

# Load a mesh from ply file
mesh = semantic_meshes.data.Ply(args.input_ply)
# Instantiate a triangle renderer for the mesh
renderer = semantic_meshes.render.triangles(mesh)
# Load colmap workspace for camera poses
colmap_workspace = semantic_meshes.data.Colmap(args.colmap)
# Instantiate an aggregator for aggregating the 2D input annotations per 3D primitive
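# Note: `classes` must match one of the class counts the library was compiled for via -DCLASSES_NUMS (see Installation)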
aggregator = semantic_meshes.fusion.MeshAggregator(primitives=renderer.getPrimitivesNum(), classes=19)

...

# Process all input images
for image_file in image_files:
    # Load image from file
    image = imageio.imread(image_file)
    ...
    # Predict class probability distributions for all pixels in the input image
    prediction = predictor(image)
    ...
    # Render the mesh from the pose of the given image
    # This returns an image that contains the index of the projected mesh primitive per pixel
    primitive_indices, _ = renderer.render(colmap_workspace.getCamera(image_file))
    ...
    # Aggregate the class probability distributions of all pixels per primitive
    aggregator.add(primitive_indices, prediction)

# After all images have been processed, the mesh contains a consistent semantic representation of the environment
aggregator.get() # Returns an array that contains the class probability distribution for each primitive

...
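# One way to obtain the `primitive_colors` used below (our own post-processing,
# not a documented part of the semantic_meshes API): take the most likely class
# per primitive and map it to a color palette, e.g. the 19 Cityscapes colors
probabilities = aggregator.get()  # shape: (num_primitives, classes)
palette = ...  # color palette as an array of shape (classes, 3)
primitive_colors = palette[probabilities.argmax(axis=-1)]  # shape: (num_primitives, 3)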

# Save colorized mesh to ply
mesh.save(args.output_ply, primitive_colors)
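
Workflow step 4, rendering new consistent 2D annotation images from the original camera poses, is not shown in the example above. A minimal sketch reusing the objects from the example; the argmax post-processing and the per-pixel lookup are our own assumptions rather than a documented part of the semantic_meshes API:

import numpy as np

# Fused per-primitive class probabilities, reduced to hard labels
primitive_classes = np.argmax(aggregator.get(), axis=-1)  # shape: (num_primitives,)

for image_file in image_files:
    # Render primitive indices from the original camera pose, as in the example above
    primitive_indices, _ = renderer.render(colmap_workspace.getCamera(image_file))
    # Look up the fused class of each pixel's primitive to get a consistent 2D annotation image
    annotation = primitive_classes[np.asarray(primitive_indices)]
    # Pixels that do not hit any primitive (if any) would need special handling here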

Docker

If you want to skip the installation and jump right in, we provide a Dockerfile that can be used without any further setup. Otherwise, see Installation.

  1. Install Docker and GPU support for Docker (e.g. the NVIDIA Container Toolkit)
  2. Build the Docker image: docker build -t semantic-meshes https://github.com/fferflo/semantic-meshes.git#master
    • If your system is using a proxy, add: --build-arg HTTP_PROXY=... --build-arg HTTPS_PROXY=...
  3. Start a shell in the Docker image and mount a folder from your host system (HOST_PATH) that contains your colmap workspace into the container (DOCKER_PATH): docker run -v /HOST_PATH:/DOCKER_PATH --gpus all -it semantic-meshes bash
  4. Run the provided example script inside the container to annotate the mesh with Cityscapes annotations: colorize_cityscapes_mesh.py --colmap /DOCKER_PATH/colmap/dense/sparse --input_ply /DOCKER_PATH/colmap/dense/meshed-delaunay.ply --images /DOCKER_PATH/colmap/dense/images --output_ply /DOCKER_PATH/colorized_mesh.ply

Running the pipeline inside a Docker image is significantly slower than running it on the host system (roughly 12 sec/image vs 2 sec/image on an RTX 6000).

Installation

Dependencies

  • CUDA: https://developer.nvidia.com/cuda-downloads
  • OpenMP: On Ubuntu: sudo apt install libomp-dev
  • Python 3
  • Boost: Requires the python and numpy components of the Boost library, which have to be compiled for the Python version that you are using. If you're lucky, your OS ships compatible Boost and Python 3 versions. Otherwise, compile Boost from source and make sure to include the --with-python=python3 switch.

Build

The repository contains CMake code that builds the project and provides a Python package in the build folder that can be installed using pip.

CMake downloads, builds and installs all other dependencies automatically. If you don't want to clutter your global system directories, add -DCMAKE_INSTALL_PREFIX=... to install to a local directory.

The framework has to be compiled for a specific number of classes (e.g. 19 for Cityscapes, or 2 for a binary segmentation). Add a semicolon-separated list with -DCLASSES_NUMS="2;19;..." (quoted, so that the shell does not split on the semicolons) for all class counts that you want to use. A longer list significantly increases the compilation time.

An example build:

git clone https://github.com/fferflo/semantic-meshes
cd semantic-meshes
mkdir build
mkdir install
cd build
cmake -DCMAKE_INSTALL_PREFIX=../install -DCLASSES_NUMS=19 ..
make -j8
make install # Installs to the local install directory
pip install ./python

Build with incompatible Boost or Python versions

Alternatively, in case your OS versions of Boost or Python do not match the version requirements of semantic-meshes, we provide an installation script that also fetches and locally installs compatible versions of these dependencies: install.sh. Since the script builds Python from source, make sure to first install all optional Python dependencies that you require (see e.g. https://github.com/python/cpython/blob/main/.github/workflows/posix-deps-apt.sh).
