Cross View SLAM

Overview

This is the associated code and dataset repository for our paper

I. D. Miller et al., "Any Way You Look at It: Semantic Crossview Localization and Mapping With LiDAR," in IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 2397-2404, April 2021, doi: 10.1109/LRA.2021.3061332.

See also our accompanying video

XView demo video

Compilation

We release the localization portion of the system, which can be integrated with a LiDAR-based mapper of the user's choice. The system requires ROS and should be built as a catkin package. We have tested with ROS Melodic on Ubuntu 18.04. Note that we require GCC 9 or greater, as well as Intel TBB.

Datasets

Our datasets

We release our own datasets, collected around University City in Philadelphia and Morgantown, PA. They can be downloaded here. Ucity2 was collected several months after Ucity, and both follow the same path. These datasets are in rosbag format and include the following topics:

  • /lidar_rgb_calib/painted_pc is the semantically labelled, motion-compensated pointcloud. Classes are encoded as a per-point color, with each channel equal to the class ID (a decoding sketch follows this list). Classes are based on Cityscapes and are listed below.
  • /os1_cloud_node/imu is raw IMU data from the Ouster OS1-64.
  • /quad/front/image_color/compressed is a compressed RGB image from the forward-facing camera.
  • /subt/global_pose is the global pose estimate from UPSLAM.
  • /subt/integrated_pose is the integrated pose estimate from UPSLAM. This differs from the above in that it does not take into account loop closures, and is used as the motion prior for the localization filter.
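
Since each color channel stores the class ID, a point's label can be read back from any single channel. Below is a minimal Python sketch, assuming the pointcloud carries a PCL-style packed rgb field (the exact field names are an assumption; inspect the bag to confirm):

# Minimal decoding sketch (assumes an "rgb" field packed PCL-style).
import struct

import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def callback(msg):
    for x, y, z, rgb in point_cloud2.read_points(
            msg, field_names=("x", "y", "z", "rgb"), skip_nans=True):
        # The float rgb value is a packed 0x00RRGGBB integer; because every
        # channel equals the class ID, any one channel recovers it.
        packed = struct.unpack("I", struct.pack("f", rgb))[0]
        class_id = packed & 0xFF  # blue channel
        # ... use (x, y, z, class_id) here ...

rospy.init_node("class_reader")
rospy.Subscriber("/lidar_rgb_calib/painted_pc", PointCloud2, callback)
rospy.spin()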

Please note that the UPSLAM odometry was generated purely from LiDAR, without semantics, and is provided only as a loose motion prior. It should not be used as ground truth.

If you require access to the raw data for your work, please reach out directly at iandm (at) seas (dot) upenn (dot) edu.

KITTI

We provide a derivative of the excellent kitti2bag tool in the scripts directory, modified to use semantics from SemanticKITTI. To use this tool, you will need to download the raw synced + rectified data from KITTI as well as the SemanticKITTI data. Your final directory structure should look like

2011_09_30
  2011_09_30_drive_0033_sync  
    image_00
      timestamps.txt
      data
    image_01
      timestamps.txt
      data
    image_02
      timestamps.txt
      data
    image_03
      timestamps.txt
      data
    labels
      000000.label
      000001.label
      ...
    oxts
      dataformat.txt
      timestamps.txt
      data
    velodyne_points
      timestamps_end.txt  
      timestamps_start.txt
      timestamps.txt
      data
  calib_cam_to_cam.txt  
  calib_imu_to_velo.txt  
  calib_velo_to_cam.txt

You can then run ./kitti2bag.py -t 2011_09_30 -r 0033 raw_synced /path/to/kitti to generate a rosbag usable with our system.

Classes

Class   Label
2       Building
7       Vegetation
13      Vehicle
100     Road/Parking Lot
102     Ground/Sidewalk
255     Unlabelled
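
For convenience, the same mapping as a Python dictionary, taken directly from the table above:

CLASS_LABELS = {
    2: "Building",
    7: "Vegetation",
    13: "Vehicle",
    100: "Road/Parking Lot",
    102: "Ground/Sidewalk",
    255: "Unlabelled",
}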

Usage

We provide a launch file for KITTI and for our datasets. To run, simply launch the appropriate launch file and play the bag. Note that the first time a map is used (or whenever it changes), the system takes several minutes to generate the processed map TDF; as long as the parameters are unchanged, the result is then cached and reused on subsequent runs. The system startup should look something like

[ INFO] [1616266360.083650372]: Found cache, checking if parameters have changed
[ INFO] [1616266360.084357050]: No cache found, loading raster map
[ INFO] [1616266360.489371763]: Computing distance maps...
[ INFO] [1616266360.489428570]: maps generated
[ INFO] [1616266360.597603324]: transforming coord
[ INFO] [1616266360.641200529]: coord rotated
[ INFO] [1616266360.724551466]: Sample grid generated
[ INFO] [1616266385.379985385]: class 0 complete
[ INFO] [1616266439.390797168]: class 1 complete
[ INFO] [1616266532.004976919]: class 2 complete
[ INFO] [1616266573.041695479]: class 3 complete
[ INFO] [1616266605.901935236]: class 4 complete
[ INFO] [1616266700.533124618]: class 5 complete
[ INFO] [1616266700.537600570]: Rasterization complete
[ INFO] [1616266700.633949062]: maps generated
[ INFO] [1616266700.633990791]: transforming coord
[ INFO] [1616266700.634004336]: coord rotated
[ INFO] [1616266700.634596830]: maps generated
[ INFO] [1616266700.634608101]: transforming coord
[ INFO] [1616266700.634618110]: coord rotated
[ INFO] [1616266700.634666000]: Initializing particles...
[ INFO] [1616266700.710166543]: Particles initialized
[ INFO] [1616266700.745398596]: Setup complete

ROS Topics

  • /cross_view_slam/gt_pose Input, takes in ground-truth localization, if provided, to draw on the map; not used for estimation.
  • /cross_view_slam/pc Input, the pointwise-labelled pointcloud
  • /cross_view_slam/motion_prior Input, the prior odometry from some LiDAR odometry system (see the bridge sketch after this list)
  • /cross_view_slam/map Output, image of the map with particles
  • /cross_view_slam/scan Output, image visualization of the flattened polar LiDAR scan
  • /cross_view_slam/pose_est Output, estimated pose of the robot with uncertainty; not published until convergence
  • /cross_view_slam/scale Output, estimated scale of the map in px/m; not published until convergence
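
If you are integrating your own LiDAR odometry, its output needs to be republished on the motion-prior topic. A minimal sketch, assuming your odometry arrives as nav_msgs/Odometry on a hypothetical /my_lidar_odom topic (check the message type the node actually expects):

import rospy
from nav_msgs.msg import Odometry

rospy.init_node("motion_prior_bridge")
# Forward an existing odometry stream to the localizer's prior input.
pub = rospy.Publisher("/cross_view_slam/motion_prior", Odometry, queue_size=10)
rospy.Subscriber("/my_lidar_odom", Odometry, pub.publish)
rospy.spin()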

ROS Parameters

  • raster_res Resolution at which to rasterize the svg. 1 is typically fine.
  • use_raster Load the map from raster images instead of the svg. If the svg is loaded, raster images are automatically generated in the accompanying folder.
  • map_path Path to the map file.
  • svg_res Resolution of the map in px/m. If not specified, the localizer will try to estimate it.
  • svg_origin_x Origin of the map in pixel coordinates, x value. Used only for ground-truth visualization.
  • svg_origin_y Origin of the map in pixel coordinates, y value. Used only for ground-truth visualization.
  • use_motion_prior If true, use the provided motion estimate; otherwise, use a zero-velocity prior.
  • num_particles Number of particles to use in the filter.
  • filter_pos_cov Motion prior uncertainty in position.
  • filter_theta_cov Motion prior uncertainty in bearing.
  • filter_regularization Regularization weight (gamma in the paper; see the paper for details).
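
These parameters are normally set in the provided launch files; as an illustration only, they could also be set programmatically. A hedged Python sketch (the /cross_view_slam namespace and all values here are assumptions, not defaults):

import rospy

# Illustrative values only; the parameter namespace is an assumption.
rospy.set_param("/cross_view_slam/map_path", "/path/to/map.svg")
rospy.set_param("/cross_view_slam/raster_res", 1)
rospy.set_param("/cross_view_slam/svg_res", 5.0)  # px/m; omit to let the localizer estimate it
rospy.set_param("/cross_view_slam/use_motion_prior", True)
rospy.set_param("/cross_view_slam/num_particles", 3000)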

Citation

If you find this work or datasets helpful, please cite

@ARTICLE{9361130,
  author={I. D. {Miller} and A. {Cowley} and R. {Konkimalla} and S. S. {Shivakumar} and T. {Nguyen} and T. {Smith} and C. J. {Taylor} and V. {Kumar}},
  journal={IEEE Robotics and Automation Letters},
  title={Any Way You Look at It: Semantic Crossview Localization and Mapping With LiDAR},
  year={2021},
  volume={6},
  number={2},
  pages={2397-2404},
  doi={10.1109/LRA.2021.3061332}}