Motion planning environment for Sampling-based Planners

Overview

Sampling-Based Motion Planners' Testing Environment


Sampling-based motion planners' testing environment (sbp-env) is a full-featured framework for quickly testing different sampling-based algorithms for motion planning. sbp-env focuses on the flexibility of tinkering with different aspects of the framework, and divides the main planning components into two categories: (i) samplers and (ii) planners.

Motion planning research has mainly focused on (i) improving sampling efficiency (with methods such as heuristics or learned distributions) and (ii) the algorithmic aspect of the planner, i.e. the different routines used to build a connected graph. By separating these two components, one can quickly swap either of them out to test novel ideas.

Have a look at the documentation for more detailed information. If you are looking for the previous code for the RRdT* paper, it is now archived at soraxas/rrdt.

Installation

Optional

I recommend first creating a virtual environment with

# assumes python3 and bash shell
python -m venv sbp_env
source sbp_env/bin/activate

Install dependencies

You can install all the needed packages with pip.

pip install -r requirements.txt

There is also an optional dependency on klampt if you want to use the 3D simulator. Refer to its installation guide for details.
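On most platforms this is typically just a pip install (assuming a pre-built wheel exists for your platform; otherwise follow Klampt's own guide):

pip install klampt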

Quick Guide

You can get a detailed help message with

python main.py --help

but the basic syntax is

python main.py <PLANNER> <MAP> [options]

It will open a new window displaying the map. Every white pixel is assumed to be free space, and non-white pixels are obstacles. Use your mouse to select two points on the map: the first is set as the start point and the second as the goal point.
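As an aside, the free-space convention can be checked with a few lines of Python (illustrative only, not part of sbp-env's API; assumes Pillow and NumPy are installed, and the pixel coordinates below are hypothetical):

# illustrative only: check the white-pixel-is-free convention of a map image
import numpy as np
from PIL import Image

img = np.asarray(Image.open("maps/room1.png").convert("RGB"))
x, y = 120, 80  # hypothetical pixel coordinates
is_free = (img[y, x] == 255).all()  # pure white pixel => free space
print("free" if is_free else "obstacle")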

Demos

Run maps with different available planners

This repository contains a framework to perform quick experiments with Sampling-Based Planners (SBPs) implemented in Python. The following planners have been implemented and experimented with in this framework.

Note that the commands shown in the respective demos can be customised with additional options. In fact, the actual command format used for the demonstrations is

python main.py <PLANNER> maps/room1.png start <sx>,<sy> goal <gx>,<gy> -vv

to have a fixed set of start and goal points for consistent visualisation, but we omitted the start/goal options in the following commands for clarity.
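For example, a fully specified run might look like this (the coordinates are hypothetical; pick any two free pixels on the map):

python main.py rrt maps/room1.png start 25,25 goal 225,225 -vv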

RRdT*

python main.py rrdt maps/room1.png -vv

RRdT* Planner

RRT*

python main.py rrt maps/room1.png -vv

RRT* Planner

Bi-RRT*

python main.py birrt maps/room1.png -vv

Bi-RRT* Planner

Informed RRT*

python main.py informedrrt maps/room1.png -vv

Informed RRT* Planner

The red ellipse shown is the dynamic sampling area for Informed RRT*.
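The ellipse is the informed sampling region of Informed RRT* (Gammell et al.): its foci are the start and goal, and its size shrinks as the best path cost found so far improves. Below is a minimal, self-contained sketch of that sampling step in 2D; it is an independent illustration, not sbp-env's own code.

# minimal sketch of informed sampling in 2D (illustration only,
# not sbp-env's implementation; assumes c_best >= straight-line cost)
import numpy as np

def sample_informed(start, goal, c_best, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    c_min = np.linalg.norm(goal - start)  # straight-line (minimum) cost
    centre = (start + goal) / 2
    # rotate the x-axis onto the start -> goal direction
    theta = np.arctan2(goal[1] - start[1], goal[0] - start[0])
    C = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # semi-axes of the ellipse with foci at start and goal
    r = np.array([c_best / 2, np.sqrt(c_best**2 - c_min**2) / 2])
    # uniform sample in the unit disk, then scale, rotate, translate
    x = rng.normal(size=2)
    x *= np.sqrt(rng.random()) / np.linalg.norm(x)
    return C @ (r * x) + centre

As the best cost c_best approaches the straight-line distance c_min, the ellipse tightens around the straight-line path, which is exactly what the shrinking red ellipse visualises.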

Others

There are also some other planners included in this repository. Some are preliminary planners that inspired RRdT*, some implement preliminary ideas, and some are useful for debugging.

Reference to this repository

You can use the following citation if you use this repository for your research:

@article{lai2021SbpEnv,
  doi = {10.21105/joss.03782},
  url = {https://doi.org/10.21105/joss.03782},
  year = {2021},
  publisher = {The Open Journal},
  volume = {6},
  number = {66},
  pages = {3782},
  author = {Tin Lai},
  title = {sbp-env: A Python Package for Sampling-based Motion Planner and Samplers},
  journal = {Journal of Open Source Software}
}
Comments
  • question on (example) usage

    According to the submitted paper, with sbp-env "one can quickly swap out different components to test novel ideas" and "validate ... hypothesis rapidly". However, from the examples in the documentation, it is unclear to me how I can obtain performance metrics on the planners when I run a test.

    Is there a way to save such metrics to a file or print them when running planners in sbp-env? If not, this might be a nice feature to implement in a future version. Otherwise, you could consider adding an example to the documentation on how to compare different planners in the same scenario.

    (this question is part of the JOSS review openjournals/joss-reviews#3782)

    opened by OlgerSiebinga 5
  • Path recognition issue

    I tried some source and destination positions with the following command, and there seems to be an issue with the recognition of the path:

    python main.py rrt maps/4d.png --engine 4d

    [screenshot attached: Screenshot from 2021-10-07 00-43-15]

    (Part of the JOSS review openjournals/joss-reviews#3782)

    opened by KanishAnand 3
  • Python version compatibility with scipy

    Mentioning the requirement of Python version >= 3.8 in the README would also help users, the way it's done over here. Python versions < 3.8 are not compatible with scipy 1.6.

    (Part of the JOSS review openjournals/joss-reviews#3782)

    opened by KanishAnand 3
  • Suggestion to make installation easier

    I was wondering why you have the following remark block in your installation instructions: [screenshot of the remark block]

    I think it would be easier to add those two packages to the file requirements_klampt.txt. That way they'll be installed automatically, saving the user an extra action. Or is there a reason I'm missing why that shouldn't be done?

    opened by OlgerSiebinga 3
  • Exception after running the example from the documentation

    When I run the example from the quick start page in the documentation, an exception occurs.

    The command: python main.py rrt maps/room1.png

    The exception:

    Traceback (most recent call last):
      File "main.py", line 287, in <module>
        environment.run()
      File "C:\Users\Olger\PycharmProjects\sbp-env\env.py", line 198, in run
        self.visualiser.terminates_hook()
      File "C:\Users\Olger\PycharmProjects\sbp-env\visualiser.py", line 148, in terminates_hook
        self.env_instance.sampler.visualiser.terminates_hook()
      File "C:\Users\Olger\PycharmProjects\sbp-env\env.py", line 126, in __getattr__
        return object.__getattribute__(self.visualiser, attr)
    AttributeError: 'PygameEnvVisualiser' object has no attribute 'sampler'
    

    The exception only occurs after the simulation has finished, so it seems like a minor problem. However, I'm not really sure what happens at env.py, line 126, in __getattr__, or why, so I don't have a proposed fix.
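    A plausible reading of the quoted traceback (an assumption based only on these lines, not verified against the repo) is that env.py forwards unknown attribute lookups to its visualiser, so any attribute the visualiser lacks surfaces as this AttributeError. A minimal sketch of that pattern:

    # minimal sketch of the delegation pattern implied by the traceback
    # (illustrative only; names are taken from the quoted trace)
    class Env:
        def __init__(self, visualiser):
            self.visualiser = visualiser

        def __getattr__(self, attr):
            # only called when normal lookup fails; forwards to the
            # visualiser, so an attribute missing there raises AttributeError
            return object.__getattribute__(self.visualiser, attr)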

    opened by OlgerSiebinga 2
  • invalid start and goal point can be specified with command-line interface

    When specifying start and goal points on the command line, it is possible to specify invalid points. Specifying an invalid start or goal will result in an infinite loop.

    For example, running python main.py rrt maps\room1.png start 10,10 goal 15,15 will result in an infinite loop with the following GUI:

    [screenshot of the GUI]

    Expected behavior when supplying an invalid option would be an exception.

    opened by OlgerSiebinga 1
  • Test Instructions

    Though it's standard, adding instructions for running the tests to the documentation might be helpful for users wanting to contribute.

    (Part of the JOSS review openjournals/joss-reviews#3782)

    opened by KanishAnand 1
  • Graph building of prm planner without user information

    The graph-building method in the PRM planner (build_graph() in prmPlanner.py) can take quite some time when a large number of nodes is used. However, the user is not notified that the planner is still processing data. The first time I encountered this, I suspected the software was stuck in an infinite loop because the window was no longer responding. I think this can easily be fixed by adding a tqdm bar in the build_graph() method (at line 83).
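    The suggested pattern is roughly the following (a sketch only; the iterable and loop body stand in for prmPlanner.py's own code):

    # sketch: wrap the node-processing loop in a tqdm progress bar
    from tqdm import tqdm

    def build_graph(nodes):
        for node in tqdm(nodes, desc="Building PRM graph"):
            ...  # connect `node` to its neighbours as before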

    (this suggestion is part of the JOSS review openjournals/joss-reviews#3782)

    opened by OlgerSiebinga 1
  • Skip-optimality Problem

    Hi,

    1. I am wondering about the use_rtree parameter in the choose_least_cost_parent() and rewire() functions (RRT). Is it no longer necessary because we use numpy's calculation method?
    2. When I run the informedrrt algorithm, the ellipse does not appear in the drawing as shown in the documentation. How can it be displayed?

    Sorry to interrupt you during your busy schedule.

    opened by Jiawei-00 7
Releases: v2.0.1