Isaac Gym Environments for Legged Robots

This repository provides the environment used to train ANYmal (and other robots) to walk on rough terrain using NVIDIA's Isaac Gym. It includes all components needed for sim-to-real transfer: actuator network, friction & mass randomization, noisy observations and random pushes during training.
Maintainer: Nikita Rudin
Affiliation: Robotic Systems Lab, ETH Zurich
Contact: [email protected]

Useful Links

Project website: https://leggedrobotics.github.io/legged_gym/
Paper: https://arxiv.org/abs/2109.11978

Installation

  1. Create a new python virtual env with python 3.6, 3.7 or 3.8 (3.8 recommended)
  2. Install pytorch 1.10 with cuda-11.3:
    • pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
  3. Install Isaac Gym
    • Download and install Isaac Gym Preview 3 (Preview 2 will not work!) from https://developer.nvidia.com/isaac-gym
    • cd isaacgym_lib/python && pip install -e .
    • Try running an example: python examples/1080_balls_of_solitude.py
    • For troubleshooting, check the docs: isaacgym/docs/index.html
  4. Install rsl_rl (PPO implementation)
  5. Install legged_gym
    • Clone this repository
    • cd legged_gym && git checkout develop && pip install -e .
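
After completing the steps above, a quick import check helps confirm the installation. This is only a sketch under the assumption of the standard package names (isaacgym, torch, legged_gym); note that isaacgym must be imported before torch.

    # minimal install check (hypothetical helper script)
    import isaacgym   # Isaac Gym must be imported before torch
    import torch
    import legged_gym

    print("CUDA available:", torch.cuda.is_available())
    print("isaacgym and legged_gym imported successfully")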

Code structure

  1. Each environment is defined by an env file (legged_robot.py) and a config file (legged_robot_config.py). The config file contains two classes: one containing all the environment parameters (LeggedRobotCfg) and one for the training parameters (LeggedRobotCfgPPO).
  2. Both env and config classes use inheritance.
  3. Each non-zero reward scale specified in cfg adds a function with a corresponding name to the list of elements that are summed to compute the total reward (see the sketch after this list).
  4. Tasks must be registered using task_registry.register(name, EnvClass, EnvConfig, TrainConfig). This is done in envs/__init__.py, but can also be done from outside of this repository.
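
The reward-scale convention (item 3) and the registration call (item 4) fit together as in the following sketch. The class names and the my_term reward are hypothetical, and the _reward_ prefix and import paths are assumptions based on the files named above (legged_robot.py, legged_robot_config.py, envs/__init__.py); verify them against the actual code.

    from legged_gym.envs.base.legged_robot import LeggedRobot
    from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO
    from legged_gym.utils import task_registry
    import torch   # imported after the modules above, which pull in isaacgym first

    class MyRobotCfg(LeggedRobotCfg):
        class rewards(LeggedRobotCfg.rewards):
            class scales(LeggedRobotCfg.rewards.scales):
                my_term = 0.5   # non-zero scale -> _reward_my_term() is summed into the total reward

    class MyRobotCfgPPO(LeggedRobotCfgPPO):
        class runner(LeggedRobotCfgPPO.runner):
            experiment_name = 'my_robot'

    class MyRobot(LeggedRobot):
        def _reward_my_term(self):
            # the function name must correspond to the scale name defined in the cfg
            return torch.zeros(self.num_envs, device=self.device)

    # normally done in envs/__init__.py, but it can also be called from outside this repository
    task_registry.register("my_robot", MyRobot, MyRobotCfg(), MyRobotCfgPPO())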

Usage

  1. Train:
    python isaacgym_anymal/scripts/train.py --task=anymal_c_flat
    • To run on CPU add the following arguments: --sim_device=cpu, --rl_device=cpu (sim on CPU and rl on GPU is possible).
    • To run headless (no rendering) add --headless.
    • Important: To improve performance, once the training starts press v to stop the rendering. You can then enable it later to check the progress.
    • The trained policy is saved in isaacgym_anymal/logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt, where <experiment_name> and <run_name> are defined in the train config.
    • The following command line arguments override the values set in the config files:
    • --task TASK: Task name.
    • --resume: Resume training from a checkpoint
    • --experiment_name EXPERIMENT_NAME: Name of the experiment to run or load.
    • --run_name RUN_NAME: Name of the run.
    • --load_run LOAD_RUN: Name of the run to load when resume=True. If -1: will load the last run.
    • --checkpoint CHECKPOINT: Saved model checkpoint number. If -1: will load the last checkpoint.
    • --num_envs NUM_ENVS: Number of environments to create.
    • --seed SEED: Random seed.
    • --max_iterations MAX_ITERATIONS: Maximum number of training iterations.
  2. Play a trained policy:
    python isaacgym_anymal/scripts/play.py --task=anymal_c_flat
    • By default the loaded policy is the last model of the last run of the experiment folder.
    • Other runs/model iterations can be selected by setting load_run and checkpoint in the train config.
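
Training can also be launched programmatically. The sketch below mirrors roughly what the train script does, using the task_registry helpers found in this repository; treat the exact function signatures (make_env, make_alg_runner, learn) as assumptions and compare with scripts/train.py.

    import isaacgym                                      # must be imported before torch
    from legged_gym.envs import *                        # registers the bundled tasks
    from legged_gym.utils import get_args, task_registry

    def train(args):
        # build the vectorized environment and the PPO runner for the registered task
        env, env_cfg = task_registry.make_env(name=args.task, args=args)
        ppo_runner, train_cfg = task_registry.make_alg_runner(env=env, name=args.task, args=args)
        ppo_runner.learn(num_learning_iterations=train_cfg.runner.max_iterations,
                         init_at_random_ep_len=True)

    if __name__ == '__main__':
        args = get_args()   # parses --task, --num_envs, --seed, --max_iterations, ...
        train(args)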

Adding a new environment

The base environment legged_robot implements a rough terrain locomotion task. The corresponding cfg does not specify a robot asset (URDF/MJCF) and defines no reward scales.

  1. Add a new folder to envs/ with <your_env>_config.py, which inherits from an existing environment config.
  2. If adding a new robot:
    • Add the corresponding assets to resources/.
    • In cfg set the asset path, define body names, default_joint_positions and PD gains. Specify the desired train_cfg and the name of the environment (Python class).
    • In train_cfg set experiment_name and run_name
  3. (If needed) implement your environment in <your_env>.py, inherit from an existing environment, overwrite the desired functions and/or add your reward functions.
  4. Register your env in isaacgym_anymal/envs/__init__.py.
  5. Modify/Tune other parameters in your cfg, cfg_train as needed. To remove a reward set its scale to zero. Do not modify parameters of other envs!
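
Putting steps 1-5 together, the cfg for a new robot might look like the sketch below. All names (robot, asset path, joint names, gains) are placeholders, and the attribute layout assumes the base LeggedRobotCfg/LeggedRobotCfgPPO classes; check legged_robot_config.py for the exact fields.

    from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO

    class MyRobotRoughCfg(LeggedRobotCfg):
        class asset(LeggedRobotCfg.asset):
            file = '{LEGGED_GYM_ROOT_DIR}/resources/robots/my_robot/urdf/my_robot.urdf'
            foot_name = 'FOOT'                         # substring identifying the feet bodies

        class init_state(LeggedRobotCfg.init_state):
            pos = [0.0, 0.0, 0.6]                      # initial base position [m]
            default_joint_angles = {                   # default joint positions (target when action = 0)
                'HAA': 0.0,
                'HFE': 0.4,
                'KFE': -0.8,
            }

        class control(LeggedRobotCfg.control):
            stiffness = {'HAA': 80., 'HFE': 80., 'KFE': 80.}   # P gains [N*m/rad]
            damping = {'HAA': 2., 'HFE': 2., 'KFE': 2.}        # D gains [N*m*s/rad]

    class MyRobotRoughCfgPPO(LeggedRobotCfgPPO):
        class runner(LeggedRobotCfgPPO.runner):
            experiment_name = 'rough_my_robot'
            run_name = ''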

Troubleshooting

  1. If you get the following error: ImportError: libpython3.8m.so.1.0: cannot open shared object file: No such file or directory, do: sudo apt install libpython3.8

Known Issues

  1. The contact forces reported by net_contact_force_tensor are unreliable when simulating on GPU with a triangle mesh terrain. A workaround is to use force sensors, but the forces are propagated through the sensors of consecutive bodies, resulting in undesirable behaviour. However, for a legged robot it is possible to add sensors to the feet/end effectors only and get the expected results. When using the force sensors, make sure to exclude gravity from the reported forces with sensor_options.enable_forward_dynamics_forces. Example:
    sensor_pose = gymapi.Transform()
    for name in feet_names:
        sensor_options = gymapi.ForceSensorProperties()
        sensor_options.enable_forward_dynamics_forces = False # for example gravity
        sensor_options.enable_constraint_solver_forces = True # for example contacts
        sensor_options.use_world_frame = True # report forces in world frame (easier to get vertical components)
        index = self.gym.find_asset_rigid_body_index(robot_asset, name)
        self.gym.create_asset_force_sensor(robot_asset, index, sensor_pose, sensor_options)
    (...)

    sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
    self.gym.refresh_force_sensor_tensor(self.sim)
    force_sensor_readings = gymtorch.wrap_tensor(sensor_tensor)
    self.sensor_forces = force_sensor_readings.view(self.num_envs, 4, 6)[..., :3]  # (num_envs, 4 feet, 6D wrench) -> keep only the force components
    (...)

    self.gym.refresh_force_sensor_tensor(self.sim)
    contact = self.sensor_forces[:, :, 2] > 1.  # vertical force above 1 N counts as foot contact