Dynamic Environments with Deformable Objects (DEDO)

Overview


DEDO is a lightweight and customizable suite of environments with deformable objects. It is aimed at researchers in the machine learning, reinforcement learning, robotics, and computer vision communities. The suite provides a set of everyday tasks that involve deformables, such as hanging cloth, dressing a person, and buttoning buttons. We provide examples of integrating two popular reinforcement learning libraries: StableBaselines3 and RLlib. We also provide reference implementations for training various Variational Autoencoder variants with our environments. DEDO is easy to set up and has few dependencies; it is highly parallelizable and supports a wide range of customizations, such as loading custom objects and textures and adjusting material properties.


Note: updates for this repo are in progress (until the presentation at NeurIPS 2021 in mid-December).

@inproceedings{dedo2021,
  title={Dynamic Environments with Deformable Objects},
  author={Rika Antonova and Peiyang Shi and Hang Yin and Zehang Weng and Danica Kragic},
  booktitle={Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
  year={2021},
}

Table of Contents:
Installation
Getting Started
Tasks
Use with RL
Use with VAE
Customization

Please refer to Wiki for the full documentation

Installation

Optional initial step: create a new conda environment with conda create --name dedo python=3.7 and activate it with conda activate dedo. Conda is not strictly needed; alternatives like virtualenv can be used, and a direct install without a virtual environment also works.

git clone https://github.com/contactrika/dedo
cd dedo
pip install numpy  # important: necessary for compiling NumPy-enabled PyBullet
pip install -e .
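
To check that PyBullet was built with NumPy support, you can query it directly (isNumpyEnabled is a standard PyBullet call and returns 1 when NumPy support is compiled in):

python -c "import pybullet; print('NumPy enabled:', pybullet.isNumpyEnabled())"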

Python 3.7 is recommended: on some OS and CPU combinations we found that the pip-installed PyBullet for Python 3.8 could not be compiled with NumPy enabled. To enable recording/logging videos, install ffmpeg:

sudo apt-get install ffmpeg

See more in the Installation Guide in the wiki

Getting started

To get started, run the following command to visualize a task through a hard-coded policy.

python -m dedo.demo --env=HangGarment-v1 --viz --debug
  • dedo.demo is the demo module
  • --env=HangGarment-v1 specifies the environment
  • --viz enables the GUI
  • --debug outputs additional information in the console
  • --cam_resolution 400 specifies the size of the output window
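
For example, the flags above can be combined in a single run:

python -m dedo.demo --env=HangGarment-v1 --viz --debug --cam_resolution 400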

See more in Usage-guide

Tasks

See more in Task Overview

We provide a set of 10 tasks involving deformable objects; most tasks contain 5 hand-made deformable objects. There are also two procedurally generated tasks, ButtonProc and HangProcCloth, in which the deformable objects are procedurally generated. Furthermore, to improve generalization, the v0 variant of each task randomizes textures and meshes.

All tasks have -v1 and -v2 with a particular choice of meshes and textures that is not randomized. Most tasks have versions up to -v5 with additional mesh and texture variations.

Tasks with procedurally generated cloth (ButtonProc and HangProcCloth) generate random cloth objects for all versions (but randomize textures only in v0).

HangBag

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangBag-v1 --viz

HangBag-v0: selects one of 108 bag meshes; randomized textures

HangBag-v[1-3]: three bag versions with textures shown below:

images/imgs/hang_bags_annotated.jpg

HangGarment

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangGarment-v1 --viz

HangGarment-v0: hang garment with randomized textures (a few examples below):

HangGarment-v[1-5]: 5 apron meshes and texture combos shown below:

images/imgs/hang_garments_5.jpg

HangGarment-v[6-10]: 5 shirt meshes and texture combos shown below:

images/imgs/hang_shirts_5.jpg

HangProcCloth

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangProcCloth-v1 --viz

HangProcCloth-v0: random textures, procedurally generated cloth with 1 or 2 holes.

HangProcCloth-v[1-2]: same, but with either 1 or 2 holes

images/imgs/hang_proc_cloth.jpg

Buttoning

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Button-v1 --viz

ButtonProc-v0: randomized textures and procedurally generated cloth with 2 holes, randomized hole/button positions.

ButtonProc-v[1-2]: procedurally generated cloth with 1 or 2 holes.

images/imgs/button_proc.jpg

Button-v0: randomized textures, but fixed cloth and button positions.

Button-v1: fixed cloth and button positions with one texture (see image below):

images/imgs/button.jpg

Hoop

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Hoop-v1 --viz

Hoop-v0: randomized textures

Hoop-v1: pre-selected textures

images/imgs/hoop_and_lasso.jpg

Lasso

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Lasso-v1 --viz

Lasso-v0: randomized textures

Lasso-v1: pre-selected textures

DressBag

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=DressBag-v1 --viz

DressBag-v0, DressBag-v[1-5]: demo for -v1 shown below

images/imgs/dress_bag.jpg

Visualizations of the 5 backpack mesh and texture variants for DressBag-v[1-5]:

images/imgs/backpack_meshes.jpg

DressGarment

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=DressGarment-v1 --viz

DressGarment-v0, DressGarment-v[1-5]: demo for -v1 shown below

images/imgs/dress_garment.jpg

Mask

python -m dedo.demo_preset --env=Mask-v1 --viz

Mask-v0, Mask-v[1-5]: a few texture variants shown below:

images/imgs/dress_garment.jpg

RL Examples

dedo/run_rl_sb3.py gives an example of how to train an RL algorithm from Stable Baselines 3:

python -m dedo.run_rl_sb3 --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug
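
If you prefer to call Stable Baselines 3 from your own script rather than run_rl_sb3.py, the sketch below shows the general pattern. It is only a minimal illustration under some assumptions: the get_args helper and the args= keyword mirror how the demo scripts construct environments, so check dedo/run_rl_sb3.py for the exact interface.

# Minimal sketch (not the repository's training script): train SB3 PPO on a DEDO task.
# Assumptions: importing dedo registers the tasks with Gym, and DEDO envs take the
# parsed args object at construction, as in the demo scripts (see dedo/run_rl_sb3.py).
import gym
import dedo  # noqa: F401  (imported for its env-registration side effect)
from dedo.utils.args import get_args  # assumption: same argument helper the dedo scripts use
from stable_baselines3 import PPO

args = get_args()                        # e.g. run this script with --env=HangGarment-v0
env = gym.make(args.env, args=args)      # DEDO envs receive the parsed args object
model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100_000)
env.close()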

dedo/run_rllib.py gives an example of how to train an RL algorithm using RLlib:

python -m dedo.run_rllib --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug

For documentation, please refer to the Arguments Reference page in the wiki

To launch TensorBoard:

tensorboard --logdir=/tmp/dedo --bind_all --port 6006 \
  --samples_per_plugin images=1000

SVAE Examples

dedo/run_svae.py gives an example of how to train various flavors of VAE:

python -m dedo.run_svae --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug
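
As background for the VAE flavors that run_svae.py trains, the snippet below is the standard VAE objective (reconstruction plus a KL term) that these variants build on. It is a generic PyTorch sketch, not the repository's implementation; the tensor names are placeholders.

# Generic VAE loss sketch in PyTorch (not DEDO's code): reconstruction + KL divergence.
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    # x: input batch, x_recon: decoder output, (mu, logvar): encoder's Gaussian parameters.
    recon = F.mse_loss(x_recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl  # beta > 1 gives the beta-VAE flavor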

To launch TensorBoard:

tensorboard --logdir=/tmp/dedo --bind_all --port 6006 \
  --samples_per_plugin images=1000

Customization

To load a custom object, first fill in an entry in DEFORM_INFO in task_info.py. The key should be the .obj file path relative to data/:

DEFORM_INFO = {
...
    # An example of info for a custom item.
    'bags/custom.obj': {
        'deform_init_pos': [0, 0.47, 0.47],
        'deform_init_ori': [np.pi/2, 0, 0],
        'deform_scale': 0.1,
        'deform_elastic_stiffness': 1.0,
        'deform_bending_stiffness': 1.0,
        'deform_true_loop_vertices': [
            [0, 1, 2, 3]  # placeholder, since we don't know the true loops
        ]
    },
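
Before wiring a new mesh into DEFORM_INFO, it can help to confirm that PyBullet can load it as a soft body at all. Below is a minimal standalone check using plain PyBullet (not a DEDO API); the path is a placeholder for your mesh on disk:

# Standalone PyBullet sanity check for a custom deformable mesh (not part of DEDO).
import pybullet as p

p.connect(p.DIRECT)
p.resetSimulation(p.RESET_USE_DEFORMABLE_WORLD)  # soft bodies require the deformable world
soft_id = p.loadSoftBody('data/bags/custom.obj', scale=0.1, mass=1.0)  # placeholder path
print('loaded soft body id:', soft_id)
p.disconnect()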

Then you can use the --override_deform_obj flag:

python -m dedo.demo --env=HangBag-v0 --cam_resolution 200 --viz --debug \
    --override_deform_obj bags/custom.obj

For items not in DEFORM_INFO you will need to specify sensible defaults, for example:

python -m dedo.demo --env=HangGarment-v0 --viz --debug \
  --override_deform_obj=generated_cloth/generated_cloth.obj \
  --deform_init_pos 0.02 0.41 0.63 --deform_init_ori 0 0 1.5708

Example of scaling up the custom mesh objects:

python -m dedo.demo --env=HangGarment-v0 --viz --debug \
   --override_deform_obj=generated_cloth/generated_cloth.obj \
   --deform_init_pos 0.02 0.41 0.55 --deform_init_ori 0 0 1.5708 \
   --deform_scale 2.0 --anchor_init_pos -0.10 0.40 0.70 \
   --other_anchor_init_pos 0.10 0.40 0.70

See more in Customization Wiki

Additional Assets

The BGarment dataset is adapted from the Berkeley Garment Library

The Sewing dataset is adapted from Generating Datasets of 3D Garments with Sewing Patterns


Comments
  • Adding Point Cloud Observations to DEDO

    This PR adds point cloud (pcd) rendering to DEDO. Summary of changes:

    • Point cloud data extracted from sim environment based on a set of object ids that we want to retain
    • Depth cameras are instantiated using a cameraConfig class, which abstracts out the various camera configurations needed.
    • The cameraConfig class loads camera configs from JSON (for easy loading & sharing of camera configs), or directly by instantiation (if you know how you want to dynamically set your camera).
    • Some sample JSON camera configs are provided (4 total)
    • Unprojecting from depth image to point cloud is vectorized, so rendering point cloud observations adds negligible runtime to overall pipeline process time (should benchmark this?).
    • The original deform_env had to be adjusted so that the deformable object would have ID 0. For some reason, pybullet only renders the deformable if this is true.

    Known issues:

    • The floor has disappeared from the visual.
    opened by edwin-pan 3
  • Enables base motion on fetch robot with 1 anchor

    Changes allow the fetch robot to move towards the hanger with an apron.

    Google Doc that explains the changes: https://docs.google.com/document/d/18_9K29K4N6atvtqUxIqKhq6Bt0YSPhQgfWldUWdyvLM/edit?usp=sharing

    There are some TODOs related to removing hardcoded values and improving the results.

    opened by Nishantjannu 0
Releases(v0.1)
  • v0.1(Jan 11, 2022)

    This is the initial release of the code and functionality presented at the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks in December 2021.

    Source code(tar.gz)
    Source code(zip)
Owner
Rika
Sim-to-real with Reinforcement Learning, Variational Inference, Bayesian Optimization