Spatial Intention Maps for Multi-Agent Mobile Manipulation (ICRA 2021)

Overview

This code release accompanies the following paper:

Spatial Intention Maps for Multi-Agent Mobile Manipulation

Jimmy Wu, Xingyuan Sun, Andy Zeng, Shuran Song, Szymon Rusinkiewicz, Thomas Funkhouser

IEEE International Conference on Robotics and Automation (ICRA), 2021

Project Page | PDF | arXiv | Video

Abstract: The ability to communicate intention enables decentralized multi-agent robots to collaborate while performing physical tasks. In this work, we present spatial intention maps, a new intention representation for multi-agent vision-based deep reinforcement learning that improves coordination between decentralized mobile manipulators. In this representation, each agent's intention is provided to other agents, and rendered into an overhead 2D map aligned with visual observations. This synergizes with the recently proposed spatial action maps framework, in which state and action representations are spatially aligned, providing inductive biases that encourage emergent cooperative behaviors requiring spatial coordination, such as passing objects to each other or avoiding collisions. Experiments across a variety of multi-agent environments, including heterogeneous robot teams with different abilities (lifting, pushing, or throwing), show that incorporating spatial intention maps improves performance for different mobile manipulation tasks while significantly enhancing cooperative behaviors.

Installation

We recommend using a conda environment for this codebase. The following commands will set up a new conda environment with the correct requirements (tested on Ubuntu 18.04.3 LTS):

# Create and activate new conda env
conda create -y -n my-conda-env python=3.7.10
conda activate my-conda-env

# Install pytorch (also installs numpy==1.19.2 as a dependency)
conda install -y pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.2 -c pytorch

# Install pip requirements
pip install -r requirements.txt

# Install shortest paths module (used in simulation environment)
cd shortest_paths
python setup.py build_ext --inplace

Quickstart

We provide pretrained policies for each test environment. The download-pretrained.sh script will download the pretrained policies and save their configs and network weights into the logs and checkpoints directories, respectively. Use the following command to run it:

./download-pretrained.sh

You can then use enjoy.py to run a pretrained policy in the simulation environment. Here are a few examples you can try:

# 4 lifting robots
python enjoy.py --config-path logs/20201217T171233203789-lifting_4-small_divider-ours/config.yml
python enjoy.py --config-path logs/20201214T092812731965-lifting_4-large_empty-ours/config.yml

# 4 pushing robots
python enjoy.py --config-path logs/20201214T092814688334-pushing_4-small_divider-ours/config.yml
python enjoy.py --config-path logs/20201217T171253620771-pushing_4-large_empty-ours/config.yml

# 2 lifting + 2 pushing
python enjoy.py --config-path logs/20201214T092812868257-lifting_2_pushing_2-large_empty-ours/config.yml

# 2 lifting + 2 throwing
python enjoy.py --config-path logs/20201217T171253796927-lifting_2_throwing_2-large_empty-ours/config.yml

# 4 rescue robots
python enjoy.py --config-path logs/20210120T031916058932-rescue_4-small_empty-ours/config.yml

You should see the pretrained policy running in the PyBullet GUI that pops up. Here are a few examples of what it looks like (4x speed):

[GIFs: lifting_4-small_divider, lifting_2_pushing_2-large_empty, rescue_4-small_empty]

You can also run enjoy.py without specifying a config path, and it will list all policies in the logs directory and allow you to pick one to run:

python enjoy.py

While the focus of this work is on the multi-agent setting, the code also supports single-agent training. We provide a few pretrained single-agent policies, which can be downloaded with the following command:

./download-pretrained.sh --single-agent

Here are a few example pretrained single-agent policies you can try:

# 1 lifting robot
python enjoy.py --config-path logs/20201217T171254022070-lifting_1-small_empty-base/config.yml

# 1 pushing robot
python enjoy.py --config-path logs/20201214T092813073846-pushing_1-small_empty-base/config.yml

# 1 rescue robot
python enjoy.py --config-path logs/20210119T200131797089-rescue_1-small_empty-base/config.yml

Here is what those policies look like when running in the PyBullet GUI (2x speed):

[GIFs: lifting_1-small_empty, pushing_1-small_empty, rescue_1-small_empty]

Training in the Simulation Environment

The config/experiments directory contains the template config files used for all experiments in the paper. To start a training run, you can provide one of the template config files to the train.py script. For example, the following will train a policy on the SmallDivider environment:

python train.py config/experiments/ours/lifting_4-small_divider-ours.yml

The training script will create a log directory and checkpoint directory for the new training run inside logs/ and checkpoints/, respectively. Inside the log directory, it will also create a new config file called config.yml, which stores training run config variables and can be used to resume training or to load a trained policy for evaluation.
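
For reference, the saved config.yml is plain YAML, so you can inspect a run's variables outside of the training scripts with PyYAML. Below is a minimal sketch; the specific keys in the file depend on the experiment template:

# Minimal sketch: print the saved config variables for a training run.
import yaml

with open('logs/20201217T171233203789-lifting_4-small_divider-ours/config.yml') as f:
    config = yaml.safe_load(f)

for key, value in sorted(config.items()):
    print(f'{key}: {value}')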

Simulation Environment

To interactively explore the simulation environment using our dense action space (spatial action maps), you can use tools_simple_gui.py, which will load an environment and allow you to click on the agent's local overhead map to select navigational endpoints (each pixel is an action). Some robot types (such as lifting) have a 2-channel action space, in which case you would use left click to move, and right click to move and then attempt an end effector action at the destination (such as lift or throw).

python tools_simple_gui.py

Note that tools_simple_gui.py currently only supports single-agent environments. We will release a separate GUI that works for multi-agent environments.
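
For intuition, each pixel in the local overhead map corresponds to one action per channel, so a click can be viewed as an index into a (channels, height, width) action-value map. The sketch below illustrates this indexing under assumed map dimensions; it is not code from this repository:

# Illustrative sketch of a dense 2-channel spatial action space. The map size
# and indexing scheme here are assumptions, not this repository's actual code.
import numpy as np

num_channels, height, width = 2, 96, 96  # assumed action map dimensions

def click_to_action(x, y, end_effector=False):
    # Channel 0: move to the clicked pixel.
    # Channel 1: move, then attempt the end effector action (e.g. lift or throw).
    channel = 1 if end_effector else 0
    return channel * height * width + y * width + x

q_values = np.random.rand(num_channels, height, width)  # stand-in for network output
channel, y, x = np.unravel_index(int(q_values.argmax()), q_values.shape)  # greedy action
print(click_to_action(x, y, end_effector=bool(channel)))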

Evaluation

Trained policies can be evaluated using the evaluate.py script, which takes in the config path for the training run. For example, to evaluate the SmallDivider pretrained policy, you can run:

python evaluate.py --config-path logs/20201217T171233203789-lifting_4-small_divider-ours/config.yml

This will load the trained policy from the specified training run, and run evaluation on it. The results are saved to an .npy file in the eval directory. You can then run jupyter notebook and navigate to eval_summary.ipynb to load the .npy files and generate tables and plots of the results.
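
If you just want to peek at a results file without the notebook, NumPy can load it directly. A minimal sketch follows; the file name and the internal structure of the stored results are assumptions:

# Minimal sketch: load a results file saved by evaluate.py. The file name and
# the layout of the stored data are assumptions.
import numpy as np

results = np.load('eval/20201217T171233203789-lifting_4-small_divider-ours.npy',
                  allow_pickle=True)
print(type(results), getattr(results, 'shape', None))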

Running in the Real Environment

We train policies in simulation and run them directly on the real robot by mirroring the real environment inside the simulation. To do this, we first use ArUco markers to estimate 2D poses of robots and objects in the real environment, and then use the estimated poses to update the simulation. Note that setting up the real environment, particularly the marker pose estimation, can take a fair amount of time and effort.
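
Conceptually, the mirroring step writes each estimated 2D pose (x, y, heading) into the simulation. The sketch below shows the general idea using the standard PyBullet API; the body ID, model, and z-height are placeholders, and the repository's actual update code may differ:

# Conceptual sketch: mirror an estimated 2D pose into a PyBullet simulation.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless physics server for this example
p.setAdditionalSearchPath(pybullet_data.getDataPath())
body_id = p.loadURDF('r2d2.urdf')  # placeholder body; the repo uses its own robot models

def mirror_pose(body_id, x, y, heading, z=0.0):
    # Convert a 2D pose (x, y, heading) from the pose estimation server
    # into a 3D position and quaternion for PyBullet.
    position = [x, y, z]
    orientation = p.getQuaternionFromEuler([0, 0, heading])
    p.resetBasePositionAndOrientation(body_id, position, orientation)

mirror_pose(body_id, 0.25, -0.10, 1.57)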

Vector SDK Setup

If you previously ran pip install -r requirements.txt following the installation instructions above, the anki_vector library should already be installed. Run the following command to set up each robot you plan to use:

python -m anki_vector.configure

After the setup is complete, you can open the Vector config file located at ~/.anki_vector/sdk_config.ini to verify that all of your robots are present.

You can also run some of the official examples to verify that the setup procedure worked. For further reference, please see the Vector SDK documentation.
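
As a quick smoke test for a single robot, a minimal script along the lines of the SDK's hello world example should work (the serial number comes from the setup step above):

# Minimal connectivity check, adapted from the official Vector SDK hello world example.
import anki_vector

args = anki_vector.util.parse_command_args()
with anki_vector.Robot(args.serial) as robot:
    robot.behavior.say_text('Vector SDK is working')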

Connecting to the Vector

The following command will try to connect to all the robots in your Vector config file and keep them still. It will print out a message for each robot it successfully connects to, and can be used to verify that the Vector SDK can connect to all of your robots.

python vector_keep_still.py

Note: If you get the following error, you will need to make a small fix to the anki_vector library.

AttributeError: module 'anki_vector.connection' has no attribute 'CONTROL_PRIORITY_LEVEL'

Locate the anki_vector/behavior.py file inside your installed conda libraries. The full path should be in the error message. At the bottom of anki_vector/behavior.py, change connection.CONTROL_PRIORITY_LEVEL.RESERVE_CONTROL to connection.ControlPriorityLevel.RESERVE_CONTROL.


Sometimes the IP addresses of your robots will change. To update the Vector config file with new IP addresses, you can run the following command:

python vector_run_mdns.py

The script uses mDNS to find all Vector robots on the local network and will automatically update their IP addresses in the Vector config file. It will also print out the hostname, IP address, and MAC address of every robot found. Make sure zeroconf is installed (pip install zeroconf), otherwise mDNS discovery may not work reliably. Alternatively, you can open the Vector config file at ~/.anki_vector/sdk_config.ini in a text editor and manually update the IP addresses.
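
For reference, the discovery step looks roughly like the sketch below. The _ankivector._tcp.local. service type is an assumption about how Vector advertises itself; vector_run_mdns.py may use something different:

# Rough sketch of mDNS discovery with python-zeroconf. The service type string
# is an assumption and may differ from what vector_run_mdns.py actually uses.
import socket
import time
from zeroconf import ServiceBrowser, Zeroconf

class VectorListener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info and info.addresses:
            print(f'{name}: {socket.inet_ntoa(info.addresses[0])}')

    def update_service(self, zc, type_, name):
        pass  # required by newer zeroconf versions

    def remove_service(self, zc, type_, name):
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, '_ankivector._tcp.local.', VectorListener())
time.sleep(5)  # give robots a few seconds to respond
zc.close()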

Controlling the Vector

The vector_keyboard_controller.py script is adapted from the remote control example in the official SDK, and can be used to verify that you are able to control the robot using the Vector SDK. Use it as follows:

python vector_keyboard_controller.py --robot-index ROBOT_INDEX

The --robot-index argument specifies the robot you wish to control and refers to the index of the robot in the Vector config file (~/.anki_vector/sdk_config.ini).

Building the Real Environment

Please reference the videos on the project page when building the real environment setup.

We built the walls using 50 cm x 44 mm strips of Elmer's Foam Board. We also use several 3D printed parts, which we printed using the Sindoh 3DWOX 1 3D printer (with PLA filament). All 3D model files are in the stl directory.

Here are the different parts to 3D print for the environment setup:

  • cube.stl: the objects that the robots will forage
  • wall-support.stl: triangular supports used to secure the walls to the tabletop
  • rounded-corner.stl: rounded blocks installed in corners of the environment to allow pushing through corners
  • board-corner.stl: used for pose estimation with ArUco markers

Additionally, a 3D printed attachment needs to be installed on each robot to enable its special abilities:

  • lifting-attachment.stl: attach to bottom of Vector's lift, allows the lifting robot (and rescue robot) to align with objects
  • pushing-attachment.stl: attach to front of Vector's lift, allows the pushing robot to push objects more predictably
  • throwing-attachment.stl: attach to arms of Vector's lift, allows the throwing robot to throw objects backwards

Note that all attachments need to be secured to the robot (using tape, for example). The robots will not be able to reliably execute their end effector action with loose attachments.

There are also a few things to print in the printouts directory:

  • back-covers.pdf: attach to back of throwing robot to make throws more consistent (recommend printing on cardstock)
  • receptacle.pdf: the target receptacle, place in the top right corner of the environment

Running Trained Policies on the Real Robot

First see the aruco directory for instructions on setting up pose estimation with ArUco markers.

Once the setup is completed, make sure the pose estimation server is started before proceeding:

cd aruco
python server.py

You can also use tools_simple_gui.py from before to manually control a robot in the real environment, which lets you verify that all components of the real setup are working properly, including pose estimation and robot control. See the bottom of the main function in tools_simple_gui.py (L100) for the appropriate arguments. You will need to enable real and provide values for real_robot_indices and real_cube_indices (a sketch of what this might look like follows the command below). You can then run the same command from before to start the GUI:

python tools_simple_gui.py
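
The arguments at the bottom of main() might look something like the following (hypothetical values; match them to the variable names you find in the file and the marker indices from your ArUco setup):

# Hypothetical values for the bottom of main() in tools_simple_gui.py.
real = True
real_robot_indices = [0]          # one robot, since this GUI is single-agent only
real_cube_indices = [0, 1, 2, 3]  # ArUco marker indices of the cubes in your setup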

You should see the simulation environment in the PyBullet GUI mirror the real setup. If the poses in the simulation do not look correct, you can restart the pose estimation server with the --debug flag to enable debug visualizations:

cd aruco
python server.py --debug

As previously noted, tools_simple_gui.py currently only supports single-agent control. We will release a separate GUI that allows multi-agent control.


Once you have verified that manual control with tools_simple_gui.py works, you can then run a trained policy using enjoy.py from before. For example, to run the SmallDivider pretrained policy in the real environment, you can run:

python enjoy.py --config-path logs/20201217T171233203789-lifting_4-small_divider-ours/config.yml --real --real-robot-indices 0,1,2,3 --real-cube-indices 0,1,3,5,6,7,8,9,10,11

Citation

If you find this work useful for your research, please consider citing:

@inproceedings{wu2021spatial,
  title = {Spatial Intention Maps for Multi-Agent Mobile Manipulation},
  author = {Wu, Jimmy and Sun, Xingyuan and Zeng, Andy and Song, Shuran and Rusinkiewicz, Szymon and Funkhouser, Thomas},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year = {2021}
}