Panoptic Mapping

A flexible submap-based framework towards spatio-temporally consistent volumetric mapping and scene understanding.

Overview

This package contains panoptic_mapping, a general framework for semantic volumetric mapping. Among others, we provide a submap-based approach that leverages panoptic scene understanding for adaptive, spatio-temporally consistent volumetric mapping, as well as regular monolithic semantic mapping.

Multi-resolution 3D reconstruction, active and inactive panoptic submaps for temporal consistency, online change detection, and more.

Table of Contents

Paper

Video

Installation

Datasets

Examples

Contributing

Paper

If you find this package useful for your research, please consider citing our paper:

  • Lukas Schmid, Jeffrey Delmerico, Johannes Schönberger, Juan Nieto, Marc Pollefeys, Roland Siegwart, and Cesar Cadena. "Panoptic Multi-TSDFs: a Flexible Representation for Online Multi-resolution Volumetric Mapping and Long-term Dynamic Scene Consistency." arXiv preprint arXiv:2109.10165 (2021). [ArXiv]
    @ARTICLE{schmid2021panoptic,
      title={Panoptic Multi-TSDFs: a Flexible Representation for Online Multi-resolution Volumetric Mapping and Long-term Dynamic Scene Consistency},
      author={Schmid, Lukas and Delmerico, Jeffrey and Sch{\"o}nberger, Johannes and Nieto, Juan and Pollefeys, Marc and Siegwart, Roland and Cadena, Cesar},
      journal={arXiv preprint arXiv:2109.10165},
      year={2021}
    }

Video

A short video overview explaining the approach will be released upon publication.

Installation

Installation instructions for Linux. The repository was developed on Ubuntu 18.04 with ROS Melodic and also tested on Ubuntu 20.04 with ROS Noetic.
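
If you are unsure which ROS distribution is active on your machine, you can check:

# Print the active ROS distribution, e.g. melodic or noetic
rosversion -d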

Prerequisites

  1. If you have not already done so, install ROS (Desktop-Full is recommended).

  2. If you have not already done so, create a catkin workspace with catkin tools:

    # Create a new workspace
    sudo apt-get install python-catkin-tools  # On ROS Noetic, install python3-catkin-tools instead
    mkdir -p ~/catkin_ws/src
    cd ~/catkin_ws
    catkin init
    catkin config --extend /opt/ros/$ROS_DISTRO
    catkin config --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo
    catkin config --merge-devel
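
You can verify the resulting configuration at any time; a quick check:

    # Print the current workspace configuration (extended path, build type, devel layout)
    catkin config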

Installation

  1. Install system dependencies:

    sudo apt-get install python-wstool python-catkin-tools  # On ROS Noetic: python3-wstool python3-catkin-tools
  2. Move to your catkin workspace:

    cd ~/catkin_ws/src
  3. Download the repository using SSH:

    git clone [email protected]:ethz-asl/panoptic_mapping.git
  4. Download and install the package dependencies using wstool:

    • If you created a new workspace:
    wstool init . ./panoptic_mapping/panoptic_mapping.rosinstall
    wstool update
    • If you use an existing workspace (note that some dependencies require specific branches, which will be checked out):
    wstool merge -t . ./panoptic_mapping/panoptic_mapping.rosinstall
    wstool update
  5. Compile and source:

    catkin build panoptic_mapping_utils
    source ../devel/setup.bash
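
As a sanity check that the dependencies were fetched and the workspace is sourced, both of the following should succeed (a sketch; adapt the workspace path if yours differs):

    # List the fetched dependencies and their checked-out versions
    wstool info -t ~/catkin_ws/src
    # Should print the path to panoptic_mapping_ros inside your workspace
    rospack find panoptic_mapping_ros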

Datasets

The datasets described in the paper and used for the demo can be downloaded from the ASL Datasets.

A utility script is provided to directly download the data:

roscd panoptic_mapping_utils
export FLAT_DATA_DIR="/home/$USER/Documents"  # Or whichever path you prefer.
chmod +x scripts/download_flat_dataset.sh
./scripts/download_flat_dataset.sh
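
After the download finishes, you can sanity-check the result. The examples below use per-run folders such as run1 and run2; the flat_dataset subfolder name here is an assumption, so adjust it to whatever the script created:

ls "$FLAT_DATA_DIR/flat_dataset"  # expect per-run folders such as run1/ and run2/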

Additional data to run the mapper on the 3RScan dataset will follow.

Examples

Running the Panoptic Mapper

This example explains how to run the Panoptic Multi-TSDF mapper on the flat dataset.

  1. First, download the flat dataset:

    roscd panoptic_mapping_utils
    export FLAT_DATA_DIR="/home/$USER/Documents"  # Or whichever path you prefer.
    chmod +x scripts/download_flat_dataset.sh
    ./scripts/download_flat_dataset.sh
    
  2. Set the data base_path in launch/run.launch (L10) and file_name in config/mapper/flat_groundtruth.yaml (L15) to the downloaded path.

  3. Run the mapper:

    roslaunch panoptic_mapping_ros run.launch
    
  4. You should now see the map being incrementally built.

  5. After the map has finished building, you can save it:

    rosservice call /panoptic_mapper/save_map "file_path: '/path/to/run1.panmap'" 
    
  6. Terminate the mapper by pressing Ctrl+C. You can continue the experiment on run2 of the flat dataset by changing the base_path-ending in launch/run.launch (L10) to run2, and setting load_map and load_path in launch/run.launch (L26-27) to true and /path/to/run1.panmap, respectively. Optionally, you can also change the color_mode in config/mapper/flat_groundtruth.yaml (L118) to better highlight the change detection at work. A sketch of overriding these values from the command line is given after this list.

    roslaunch panoptic_mapping_ros run.launch
    
  7. You should now see the map being updated based on the first run.
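
Two optional sketches for the steps above. If you are unsure about the exact service names on your setup (step 5), you can list what the mapper advertises; the /panoptic_mapper namespace is taken from the save_map call above:

    rosservice list | grep panoptic_mapper

If base_path, load_map, and load_path are declared as roslaunch arguments in run.launch (an assumption based on the launch-file lines referenced in step 6), you can also override them on the command line instead of editing the file:

    roslaunch panoptic_mapping_ros run.launch load_map:=true load_path:=/path/to/run1.panmap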

Monolithic Semantic Mapping

This example will follow shortly.

Running the RIO Dataset

This example will follow shortly.

Contributing

panoptic_mapping is an open-source project; any contributions are welcome!

For issues, bugs, or suggestions, please open a GitHub Issue.

To add to this repository:

  • Please employ the feature-branch workflow.
  • Set up our auto-formatter for a coherent style (we follow the Google style guide):
    # Download the linter
    cd <linter_dest>
    git clone [email protected]:ethz-asl/linter.git
    cd linter
    echo ". $(realpath setup_linter.sh)" >> ~/.bashrc
    bash
    roscd panoptic_mapping/..
    init_linter_git_hooks
    # You're all set to go!
    
  • Please open a Pull Request for your changes.
  • Thank you for contributing!