Motion planning environment for Sampling-based Planners

Overview

Sampling-Based Motion Planners' Testing Environment

Sampling-based motion planners' testing environment (sbp-env) is a full-featured framework for quickly testing different sampling-based algorithms for motion planning. sbp-env focuses on the flexibility of tinkering with different aspects of the framework, and divides the main planning components into two categories: (i) samplers and (ii) planners.

The focus of motion planning research has mainly been on (i) improving sampling efficiency (with methods such as heuristics or learned distributions) and (ii) the algorithmic aspect of the planner, i.e. using different routines to build a connected graph. Therefore, by separating the two components, one can quickly swap out either of them to test novel ideas.

Have a look at the documentation for more detailed information. If you are looking for the previous code for the RRdT* paper, it is now archived at soraxas/rrdt.

Installation

Optional

I recommend first creating a virtual environment with

# assumes python3 and bash shell
python -m venv sbp_env
source sbp_env/bin/activate

Install dependencies

You can install all the needed packages with pip.

pip install -r requirements.txt

There is also an optional dependency on klampt if you want to use the 3D simulator. Refer to its installation guide for details.
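
Klampt can typically be installed from PyPI, but the exact steps (and any extra packages) depend on your platform, so treat the command below as a sketch and fall back to the Klampt installation guide if it fails.

pip install klampt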

Quick Guide

You can get a detailed help message with

python main.py --help

but the basic syntax is

python main.py <PLANNER> <MAP> [options]

This will open a new window that displays the map. Every white pixel is assumed to be free space, and non-white pixels are obstacles. Use your mouse to select two points on the map: the first will be set as the start point and the second as the goal point.

Demos

Run maps with different available Planners

This repository contains a framework for performing quick experiments with Sampling-Based Planners (SBPs) implemented in Python. The following planners have been implemented and experimented with in this framework.

Note that the commands shown in the respective demos can be customised with additional options. In fact, the actual command format used for the demonstrations is

python main.py <PLANNER> maps/room1.png start <sx>,<sy> goal <gx>,<gy> -vv

to have a fixed set of start and goal points for consistent visualisation, but we omit the start/goal options in the following commands for clarity.
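
For example, a fully specified run might look like the following; the coordinates here are only placeholders, so pick any two free-space points on your own map.

python main.py rrt maps/room1.png start 100,100 goal 350,350 -vv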

RRdT*

python main.py rrdt maps/room1.png -vv

RRdT* Planner

RRT*

python main.py rrt maps/room1.png -vv

RRT* Planner

Bi-RRT*

python main.py birrt maps/room1.png -vv

Bi-RRT* Planner

Informed RRT*

python main.py informedrrt maps/room1.png -vv

Informed RRT* Planner

The red ellipse shown is the dynamic sampling area for Informed RRT*.

Others

There are also some other planners included in this repository. Some are preliminary planners that inspired RRdT*, some implement preliminary ideas, and some are useful for debugging.

Reference to this repository

You can use the following citation if you use this repository for your research:

@article{lai2021SbpEnv,
  doi = {10.21105/joss.03782},
  url = {https://doi.org/10.21105/joss.03782},
  year = {2021},
  publisher = {The Open Journal},
  volume = {6},
  number = {66},
  pages = {3782},
  author = {Tin Lai},
  title = {sbp-env: A Python Package for Sampling-based Motion Planner and Samplers},
  journal = {Journal of Open Source Software}
}
Comments
  • question on (example) usage

    According to the submitted paper, with sbp-env "one can quickly swap out different components to test novel ideas" and "validate ... hypothesis rapidly". However, from the examples in the documentation, it is unclear to me how I can obtain performance metrics on the planners when I run a test.

    Is there a way to save such metrics to a file or print them when running planners in sbp-env? If not, this might be a nice feature to implement in a future version. Otherwise, you could consider adding an example to the documentation on how to compare different planners in the same scenario (see the rough timing sketch below).

    (this question is part of the JOSS review openjournals/joss-reviews#3782)

    opened by OlgerSiebinga 5
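
    One rough workaround for the metrics question above is to time each planner run externally through the documented command-line interface. The script below is only a sketch: it measures wall-clock time including the GUI, and the map and coordinates are placeholders that must correspond to valid free-space points.

    # time_planners.py -- rough wall-clock comparison via the documented CLI
    import subprocess
    import time

    planners = ["rrt", "birrt", "rrdt", "informedrrt"]
    for planner in planners:
        t0 = time.perf_counter()
        # Run one planning episode; the map and start/goal points are placeholders
        subprocess.run(
            ["python", "main.py", planner, "maps/room1.png",
             "start", "100,100", "goal", "350,350"],
            check=True,
        )
        print(f"{planner}: {time.perf_counter() - t0:.2f} s wall-clock")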
  • Path recognition issue

    I tried some source and destination positions with the following command, and there seems to be some issue with the recognition of the path: python main.py rrt maps/4d.png --engine 4d

    Attaching screenshot below: Screenshot from 2021-10-07 00-43-15

    (Part of the JOSS review openjournals/joss-reviews#3782)

    opened by KanishAnand 3
  • Python version compatibility with scipy

    Mentioning the requirement of Python version >= 3.8 in the README would also help users, the way it's done over here. Python versions < 3.8 are not compatible with scipy 1.6.

    (Part of the JOSS review openjournals/joss-reviews#3782)

    opened by KanishAnand 3
  • Suggestion to make installation easier

    I was wondering why you have the following remark block in your installation instructions: [image]

    I think it would be easier to add those two packages to the file requirements_klampt.txt. That way they'll be installed automatically, which saves the user an extra action. Or is there a reason I'm missing why that shouldn't be done?

    opened by OlgerSiebinga 3
  • Exception after running the example from the documentation

    When I run the example from the quick start page in the documentation, an exception occurs.

    The command: python main.py rrt maps/room1.png

    The exception:

    Traceback (most recent call last):
      File "main.py", line 287, in <module>
        environment.run()
      File "C:\Users\Olger\PycharmProjects\sbp-env\env.py", line 198, in run
        self.visualiser.terminates_hook()
      File "C:\Users\Olger\PycharmProjects\sbp-env\visualiser.py", line 148, in terminates_hook
        self.env_instance.sampler.visualiser.terminates_hook()
      File "C:\Users\Olger\PycharmProjects\sbp-env\env.py", line 126, in __getattr__
        return object.__getattribute__(self.visualiser, attr)
    AttributeError: 'PygameEnvVisualiser' object has no attribute 'sampler'
    

    The exception only occurs after the simulation has finished, so it seems like a minor problem. However, I'm not really sure what happens at env.py, line 126, in __getattr__, or why, so I don't have a proposed fix.

    opened by OlgerSiebinga 2
  • invalid start and goal point can be specified with command-line interface

    When specifying start and goal points on the command line, it is possible to specify invalid points. Specifying an invalid start and goal will result in an infinite loop.

    For example, running python main.py rrt maps\room1.png start 10,10 goal 15,15 will result in an infinite loop with the following GUI:

    [image]

    Expected behavior when supplying an invalid option would be an exception.

    opened by OlgerSiebinga 1
  • Test Instructions

    Though it's standard, adding instructions for running the tests to the documentation might be helpful for users wanting to contribute.

    (Part of the JOSS review openjournals/joss-reviews#3782)

    opened by KanishAnand 1
  • Graph building of prm planner without user information

    The graph building method in the prm planner (build_graph() in prmPlanner.py) can take quite some time when a large number of nodes is used. However, the user is not notified that the planner is still processing data. The first time I encountered this, I suspected the software was stuck in an infinite loop because the window was no longer responding. I think this can be easily fixed by adding a tqdm bar in the build_graph() method (at line 83; see the sketch below).

    (this suggestion is part of the JOSS review openjournals/joss-reviews#3782)

    opened by OlgerSiebinga 1
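
    A minimal sketch of the suggested fix, assuming build_graph() iterates over the sampled nodes in a plain Python loop; the loop body, the self.nodes attribute, and the helper name below are illustrative, not the actual prmPlanner code.

    from tqdm import tqdm

    def build_graph(self):
        # Wrap the node-processing loop in tqdm so the user sees a progress
        # bar instead of an apparently frozen window.
        for node in tqdm(self.nodes, desc="Building PRM graph"):
            self._connect_to_neighbours(node)  # hypothetical helper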
  • Skip-optimality Problem

    Hi. 1. I am wondering about the parameter (use_rtree) in the choose_least_cost_parent() function and the rewire() function (RRT). Is it no longer necessary because we use numpy's calculation method? 2. When I run the informedrrt algorithm, the ellipse in the graphic display does not appear as shown in the documentation. How can it be displayed? I'm sorry to interrupt you from your busy schedule.

    opened by Jiawei-00 7
Releases (v2.0.1)