Official page of Patchwork (RA-L'21 w/ IROS'21)

Overview

Patchwork

Official page of "Patchwork: Concentric Zone-based Region-wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor", which has been accepted to RA-L with the IROS'21 option.

[Video] [Preprint Paper] [Project Wiki]

Concept of our method (CZM & GLE)

It is an overall updated version of R-GPF, the ground segmentation module of ERASOR [Code] [Paper].


Demo

KITTI 00

Rough Terrain


Characteristics

  • Single hpp file (include/patchwork/patchwork.hpp); see the usage sketch at the end of this section

  • Robust ground consistency

As shown in the demo videos and the figure below, our method achieves the most robust performance compared with other state-of-the-art methods; in particular, its precision/recall shows very little perturbation.

Please note that the concepts of traversable area and ground are quite different! Please refer to our paper for details.
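If you want to call the ground segmentation from your own code, a minimal sketch is given below. Note that the class name PatchWork, its ros::NodeHandle* constructor, and the estimate_ground signature are assumptions inferred from the example nodes; check include/patchwork/patchwork.hpp and the files under /nodes for the actual interface.

#include <ros/ros.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include "patchwork/patchwork.hpp"

int main(int argc, char** argv) {
    ros::init(argc, argv, "patchwork_minimal_example");
    ros::NodeHandle nh;

    // Assumed interface: PatchWork is templated on the PCL point type
    // and reads its parameters from the node handle.
    PatchWork<pcl::PointXYZI> patchwork(&nh);

    pcl::PointCloud<pcl::PointXYZI> cloud_in;        // fill with one LiDAR scan
    pcl::PointCloud<pcl::PointXYZI> cloud_ground;
    pcl::PointCloud<pcl::PointXYZI> cloud_nonground;
    double time_taken = 0.0;

    // Split the scan into ground / non-ground points.
    patchwork.estimate_ground(cloud_in, cloud_ground, cloud_nonground, time_taken);

    ROS_INFO("Ground: %zu pts, non-ground: %zu pts (time: %.3f)",
             cloud_ground.size(), cloud_nonground.size(), time_taken);
    return 0;
}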

Contents

  1. Test Env.
  2. Requirements
  3. How to Run Patchwork
  4. Citation

Test Env.

The code has been tested successfully on

  • Ubuntu 18.04 LTS
  • ROS Melodic

Requirements

ROS Setting

    1. Install ROS on your machine.
    2. Thereafter, jsk-visualization is required to visualize the Ground Likelihood Estimation status:
sudo apt-get install ros-melodic-jsk-recognition
sudo apt-get install ros-melodic-jsk-common-msgs
sudo apt-get install ros-melodic-jsk-rviz-plugins
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/LimHyungTae/patchwork.git
cd .. && catkin build patchwork 

How to Run Patchwork

We provide three examples:

  • Offline KITTI dataset
  • Online (ROS Callback) KITTI dataset
  • Own dataset using pcd files

Offline KITTI dataset

  1. Download the SemanticKITTI Odometry dataset (we also need the labels since we also release the evaluation code! :)

  2. Set the data_path in launch/offline_kitti.launch for your machine.

The data_path consists of a velodyne folder and a labels folder, as follows:

data_path (e.g. 00, 01, ..., or 10)
_____velodyne
     |___000000.bin
     |___000001.bin
     |___000002.bin
     |...
_____labels
     |___000000.label
     |___000001.label
     |___000002.label
     |...
_____...
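If you want to sanity-check the files under your data_path, the standalone sketch below parses one scan and its label file according to the public KITTI/SemanticKITTI binary formats (consecutive float32 x, y, z, intensity values per point in *.bin; one uint32 per point in *.label, whose lower 16 bits are the semantic class). The file names are placeholders:

#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Placeholder paths; point these at files under your own data_path.
    const std::string bin_path   = "velodyne/000000.bin";
    const std::string label_path = "labels/000000.label";

    std::ifstream bin(bin_path, std::ios::binary | std::ios::ate);
    std::ifstream lab(label_path, std::ios::binary | std::ios::ate);
    if (!bin || !lab) {
        std::cerr << "Could not open the scan or label file" << std::endl;
        return 1;
    }

    // KITTI velodyne scan: consecutive float32 values (x, y, z, intensity) per point.
    const std::streamsize bin_bytes = bin.tellg();
    bin.seekg(0, std::ios::beg);
    std::vector<float> points(bin_bytes / sizeof(float));
    bin.read(reinterpret_cast<char*>(points.data()), bin_bytes);
    const std::size_t num_points = points.size() / 4;

    // SemanticKITTI label: one uint32 per point; the lower 16 bits are the semantic class.
    const std::streamsize lab_bytes = lab.tellg();
    lab.seekg(0, std::ios::beg);
    std::vector<uint32_t> labels(lab_bytes / sizeof(uint32_t));
    lab.read(reinterpret_cast<char*>(labels.data()), lab_bytes);

    // The two counts should match; if not, the velodyne/labels pairing is broken.
    std::cout << "points: " << num_points << ", labels: " << labels.size() << std::endl;
    return 0;
}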
   
  3. Run the launch file:
roslaunch patchwork offline_kitti.launch

You can directly feel the speed of Patchwork! 😉

Online (ROS Callback) KITTI dataset

We also provide a rosbag example. If you run Patchwork via rosbag, please refer to this example.

  1. Download the ready-made rosbag:
wget https://urserver.kaist.ac.kr/publicdata/patchwork/kitti_00_xyzilid.bag
  2. After building this package, run the launch file as follows:
roslaunch patchwork rosbag_kitti.launch
  3. Then play the rosbag file in another terminal:
rosbag play kitti_00_xyzilid.bag

Own dataset using pcd files

Please refer to /nodes/offilne_own_data.cpp.

(Note that your own data may not come with ground-truth labels!)

Be sure to set the right parameters. Otherwise, your results may be wrong, as shown below:

(Left: with wrong parameters. Right: after setting the right parameters.)

For a better understanding of the parameters of Patchwork, please read our wiki page, 4. IMPORTANT: Setting Parameters of Patchwork in Your Own Env.

Offline (Using *.pcd or *.bin file)

  1. Utilize /nodes/offilne_own_data.cpp

  2. Please check the output by running the following command and inspecting the corresponding files:

roslaunch patchwork offline_ouster128.launch
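As a rough illustration of what such an offline node does with a *.pcd file, the sketch below loads a cloud with PCL and hands it to the (assumed) PatchWork interface from the earlier sketch; /nodes/offilne_own_data.cpp is the authoritative reference, and the file path is a placeholder.

#include <ros/ros.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include "patchwork/patchwork.hpp"

int main(int argc, char** argv) {
    ros::init(argc, argv, "offline_own_data_example");
    ros::NodeHandle nh;

    // Placeholder path; point this at one of your own *.pcd files.
    pcl::PointCloud<pcl::PointXYZI> cloud_in;
    if (pcl::io::loadPCDFile<pcl::PointXYZI>("/path/to/your_scan.pcd", cloud_in) < 0) {
        ROS_ERROR("Could not read the pcd file");
        return 1;
    }

    // Assumed interface, as in the earlier sketch.
    PatchWork<pcl::PointXYZI> patchwork(&nh);
    pcl::PointCloud<pcl::PointXYZI> ground, nonground;
    double time_taken = 0.0;
    patchwork.estimate_ground(cloud_in, ground, nonground, time_taken);

    ROS_INFO("Ground: %zu pts, non-ground: %zu pts", ground.size(), nonground.size());
    return 0;
}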

Online (via rosbag)

  1. Utilize rosbag_kitti.launch.

  2. To do so, remap the subscriber's topic, e.g., add a remap line as follows:

<remap from="/node" to="$YOUR_LIDAR_TOPIC_NAME$"/>
  3. In addition, a minor modification of ros_kitti.cpp is necessary; refer to offline_own_data.cpp.
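For reference, a stripped-down version of such an online node might look like the sketch below. The /node topic matches the remap above; the PatchWork interface is again an assumption, and ros_kitti.cpp remains the authoritative reference.

#include <memory>

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_types.h>
#include "patchwork/patchwork.hpp"

// Assumed interface; see include/patchwork/patchwork.hpp.
std::unique_ptr<PatchWork<pcl::PointXYZI>> patchwork;

void cloudCallback(const sensor_msgs::PointCloud2::ConstPtr& msg) {
    pcl::PointCloud<pcl::PointXYZI> cloud_in, ground, nonground;
    pcl::fromROSMsg(*msg, cloud_in);  // convert the incoming message to a PCL cloud

    double time_taken = 0.0;
    patchwork->estimate_ground(cloud_in, ground, nonground, time_taken);
    ROS_INFO("Ground: %zu pts, non-ground: %zu pts", ground.size(), nonground.size());
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "online_patchwork_example");
    ros::NodeHandle nh;
    patchwork.reset(new PatchWork<pcl::PointXYZI>(&nh));

    // "/node" is the subscribed topic that the remap above redirects to your LiDAR topic.
    ros::Subscriber sub = nh.subscribe("/node", 1, cloudCallback);
    ros::spin();
    return 0;
}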

Citation

If you use our code or method in your work, please consider citing the following:

@article{lim2021patchwork,
title={Patchwork: Concentric Zone-based Region-wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor},
author={Lim, Hyungtae and Oh, Minho and Myung, Hyun},
journal={IEEE Robotics and Automation Letters},
year={2021}
}

Description

All explanations of the parameters and other experimental results will be uploaded to the wiki.

Contact

If you have any questions, please let me know:

TODO List

  • Add ROS support
  • Add preprint paper
  • Add demo videos
  • Add own dataset examples
  • Update wiki

Owner
Hyungtae Lim
Ph.D. Candidate at URL Lab @ KAIST, South Korea