DeformableRavens

Code for: https://berkeleyautomation.github.io/bags/

Overview

Code for the paper Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks. The project website (linked above) also contains the data we used to train policies.

Installation

This is how to get the code running on a local machine. First, get conda on the machine if it isn't there already:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

Then, create a new Python 3.7 conda environment (e.g., named "py3-defs") and activate it:

conda create -n py3-defs python=3.7
conda activate py3-defs

Then install:

./install_python_ubuntu.sh

Note I: the code has been tested on Ubuntu 18.04. We have not tried other Ubuntu versions or other operating systems.

Note II: Installing TensorFlow using conda is usually easier than pip because the conda version will ship with the correct CUDA and cuDNN libraries, whereas the pip version is a nightmare regarding version compatibility.
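
As a quick post-install sanity check, a snippet along these lines (our own suggestion, not part of install_python_ubuntu.sh) confirms that the conda-installed TensorFlow finds its CUDA/cuDNN libraries and a GPU:

# Sanity check (not part of the repo's install script).
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPUs visible:", tf.config.list_physical_devices('GPU'))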

Note III: the code has only been tested with PyBullet 3.0.4. In fact, there are some places which explicitly hard-code this requirement. Using later versions may work but is not recommended.
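
For example, a small check like the following (our sketch, not something shipped with the repo) verifies that the installed PyBullet matches the tested version:

# Verify the installed PyBullet version matches the one the code was tested with.
import pkg_resources

installed = pkg_resources.get_distribution("pybullet").version
assert installed == "3.0.4", f"Expected PyBullet 3.0.4, found {installed}"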

Environments and Tasks

This repository contains the tasks from the ICRA 2021 submission as well as those from the predecessor Transporter Networks paper (presented at CoRL 2020). For the latter paper, roughly 10 tasks came pre-shipped; the Transporters paper does not test with pushing or insertion-translation, but tests with all the others. See Tasks.md for task-specific documentation.

Each task subclasses the Task class and must define its own reset(). The Task class defines the oracle policy used to generate demonstrations (so the oracle is not implemented within each task subclass); its behavior is divided into cases depending on the action primitive used, self.primitive.

Similarly, different tasks have different reward functions, but all are integrated into the Task super-class and divided based on the self.metric type: pose or zone.
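
To make that structure concrete, here is a minimal sketch of how a task subclass plugs into this setup; the class name, import path, and scene-setup details are illustrative assumptions rather than the repo's actual code:

# Illustrative sketch only; the import path and class/task names are assumptions.
from ravens import tasks  # assumed import path

class MyCableTask(tasks.Task):  # hypothetical task
    def __init__(self):
        super().__init__()
        self.primitive = 'pick_place'  # the oracle branches on this action primitive
        self.metric = 'pose'           # the reward branches on 'pose' or 'zone'

    def reset(self, env):
        # Each subclass only defines its own scene setup; the oracle policy
        # and the reward computation live in the Task super-class.
        super().reset(env)
        # ... add task-specific objects to the PyBullet scene here ...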

Code Usage

Experiments start with python main.py, with --disp added to see the PyBullet GUI (not used for large-scale experiments). The general logic of main.py proceeds as follows:

  • Gather expert demonstrations for the task and store them in data/{TASK}, unless a sufficient number of demonstrations already exists. There are sub-directories for action, color, depth, info, etc., which store the data pickle files with consistent indexing per time step (see the sketch after this list). Caution: this will start "counting" the data from the existing data/ directory. If you want entirely fresh data, delete the relevant file in data/.

  • Given the data, train the designated agent. The logged data is stored in logs/{AGENT}/{TASK}/{DATE}/{train}/ in the form of a tfevent file for TensorBoard. Note: it will do multiple training runs for statistical significance.
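
As a rough illustration of the data/{TASK} layout described above, a script like this (the task name and pickle file name are hypothetical, since we do not reproduce the exact naming scheme here) could load one episode's color images:

# Illustrative only: inspect one episode's color images from data/{TASK}.
import os
import pickle

task_dir = 'data/cable-ring'       # assumed task directory
episode = '000000.pkl'             # hypothetical per-episode pickle file

with open(os.path.join(task_dir, 'color', episode), 'rb') as f:
    color_images = pickle.load(f)  # one entry per time step, consistently indexed
print(len(color_images), 'time steps of color images')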

For deformables, we actually use a separate load.py script, due to some issues with creating multiple environments.

See Commands.md for commands to reproduce experimental results.

Downloading the Data

We normally generate 1000 demos for each of the tasks. However, this can take a long time, especially for the bag tasks, so we provide pre-generated datasets for all the tasks we tested on the project website. For example, suppose we want to download demonstration data for the "bag-color-goal" task. Download the demonstration data from the website; since this is also a goal-conditioned task, download the goal demonstrations as well. Then make new data/ and goals/ directories and put the tar.gz files in the respective directories:

deformable-ravens/
    data/
        bag-color-goal_1000_demos_480Hz_filtered_Nov13.tar.gz
    goals/
        bag-color-goal_20_goals_480Hz_Nov19.tar.gz

Note: if you generate data using the main.py script, it will automatically create the data/ directory (and generate_goals.py will similarly create goals/). You only need to create data/ and goals/ manually if you are downloading the pre-existing datasets and placing them in the right spot.

Then untar both of them in their respective directories:

tar -zxvf bag-color-goal_1000_demos_480Hz_filtered_Nov13.tar.gz
tar -zxvf bag-color-goal_20_goals_480Hz_Nov19.tar.gz

Now the data should be ready! If you want to inspect and debug the data, for example the goals data, then do:

python ravens/dataset.py --path goals/bag-color-goal/

Note that by default it saves any content in goals/ to goals_out/ and data in data/ to data_out/. Also, by default, it saves images, which can be very computationally intensive if you do this for the full 1000 demos. (The goals/ data only has 20 demos.) You can change this easily in the main method of ravens/dataset.py.

Running the script will print out some interesting data statistics for you.

Miscellaneous

If you have questions, please use the public issue tracker so that everyone can benefit from the answers.

If you find this code or research paper helpful, please consider citing it:

@article{seita_bags_2021,
    author  = {Daniel Seita and Pete Florence and Jonathan Tompson and Erwin Coumans and Vikas Sindhwani and Ken Goldberg and Andy Zeng},
    title   = {{Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks}},
    journal = {arXiv preprint arXiv:2012.03385},
    year    = {2020}
}