Planning from Pixels in Environments with Combinatorially Hard Search Spaces (NeurIPS 2021)

Overview

PPGS: Planning from Pixels in Environments with Combinatorially Hard Search Spaces

PPGS learns an encoder together with latent forward and inverse models from offline trajectories, and solves new levels by graph search in the learned latent space.

[Figure: PPGS overview]

Environment Setup

  • We recommend pipenv for creating and managing virtual environments (dependencies for other environment managers can be found in the Pipfile)
git clone https://github.com/martius-lab/PPGS
cd PPGS
pipenv install
pipenv shell
  • For simplicity, this codebase is ready for training on two of the three environments (IceSlider and DigitJump). They are part of the puzzlegen package, which we provide at https://github.com/martius-lab/puzzlegen and which can be installed with
pip install -e "git+https://github.com/martius-lab/puzzlegen#egg=puzzlegen"
  • Offline datasets can be generated for training and validation. In the case of IceSlider we can use the following (a DigitJump variant is sketched right after):
python -m puzzlegen.extract_trajectories --record-dir /path/to/train_data --env-name ice_slider --start-level 0 --number-levels 1000 --max-steps 20 --n-repeat 20 --random 1
python -m puzzlegen.extract_trajectories --record-dir /path/to/test_data --env-name ice_slider --start-level 1000 --number-levels 1000 --max-steps 20 --n-repeat 5 --random 1
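The same script should also cover DigitJump; assuming the flags carry over unchanged, only --env-name differs:

python -m puzzlegen.extract_trajectories --record-dir /path/to/train_data --env-name digit_jump --start-level 0 --number-levels 1000 --max-steps 20 --n-repeat 20 --random 1
python -m puzzlegen.extract_trajectories --record-dir /path/to/test_data --env-name digit_jump --start-level 1000 --number-levels 1000 --max-steps 20 --n-repeat 5 --random 1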
  • Finally, we can add the paths to the extracted datasets in default_params.json as data_params.train_path and data_params.test_path, and set the name of the environment for validation in data_params.env_name ("ice_slider" for IceSlider or "digit_jump" for DigitJump), for example as sketched below.
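A minimal sketch of the relevant entries, assuming the dotted names map to nested JSON objects (paths are placeholders; the file contains the further settings listed under Configuration below):

{
  "data_params": {
    "train_path": "/path/to/train_data",
    "test_path": "/path/to/test_data",
    "env_name": "ice_slider"
  }
}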

  • Training and evaluation are performed sequentially by running

python main.py

Configuration

All settings can be handled by editing default_params.json.

| Param | Default | Info |
| --- | --- | --- |
| optimizer_params.eps | 1e-05 | epsilon for Adam |
| train_params.seed | null | seed for training |
| train_params.epochs | 40 | # of training epochs |
| train_params.batch_size | 128 | batch size for training |
| train_params.save_every_n_epochs | 5 | how often to save models |
| train_params.val_every_n_epochs | 2 | how often to perform validation |
| train_params.lr_dict | - | dictionary of learning rates for each component |
| train_params.loss_weight_dict | - | dictionary of weights for the three loss functions |
| train_params.margin | 0.1 | latent margin epsilon |
| train_params.hinge_params | - | hyperparameters for the margin loss |
| train_params.schedule | [] | learning rate schedule |
| model_params.name | 'ppgs' | name of the model to train, in ['ppgs', 'latent'] |
| model_params.load_model | true | whether to load a saved model if present |
| model_params.filters | [64, 128, 256, 512] | encoder filters |
| model_params.embedding_size | 16 | dimensionality of the latent space |
| model_params.normalize | true | whether to normalize embeddings |
| model_params.forward_layers | 3 | # of layers in the MLP forward model ('latent' world model) |
| model_params.forward_units | 256 | # of units in the MLP forward model ('latent' world model) |
| model_params.forward_ln | true | layer normalization in the MLP forward model ('latent' world model) |
| model_params.inverse_layers | 1 | # of layers in the MLP inverse model |
| model_params.inverse_units | 32 | # of units in the MLP inverse model |
| model_params.inverse_ln | true | layer normalization in the MLP inverse model |
| data_params.train_path | '' | path to the training dataset |
| data_params.test_path | '' | path to the validation dataset |
| data_params.env_name | 'ice_slider' | environment name ('ice_slider' for IceSlider, 'digit_jump' for DigitJump) |
| data_params.seq_len | 2 | # of steps for the multi-step loss |
| data_params.shuffle | true | whether to shuffle datasets |
| data_params.normalize | true | whether to normalize observations |
| data_params.encode_position | false | enables positional encoding |
| data_params.env_params | {} | params passed to the environment |
| eval_params.evaluate_losses | true | whether to compute evaluation losses |
| eval_params.evaluate_rollouts | true | whether to compute solution rates |
| eval_params.eval_at | [1, 3, 4] | # of steps to evaluate at |
| eval_params.latent_eval_at | [1, 5, 10] | K for latent metrics |
| eval_params.seeds | [2000] | starting seed for evaluation levels |
| eval_params.num_levels | 100 | # of evaluation levels |
| eval_params.batch_size | 128 | batch size for latent metrics evaluation |
| eval_params.planner_params.batch_size | 256 | cutoff for graph search |
| eval_params.planner_params.margin | 0.1 | latent margin for re-identification |
| eval_params.planner_params.early_stop | true | whether to stop when the goal is found |
| eval_params.planner_params.backtrack | false | enables the backtracking algorithm |
| eval_params.planner_params.penalize_visited | false | penalizes visited vertices in graph search |
| eval_params.planner_params.eps | 0 | epsilon for epsilon-greedy action selection (0 disables it) |
| eval_params.planner_params.max_steps | 256 | maximal solution length |
| eval_params.planner_params.replan_horizon | 10 | T_max for the full planner |
| eval_params.planner_params.snap | false | snaps new vertices to visited ones |
| working_dir | "results/ppgs" | directory for checkpoints and results |
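As a sketch of how these dotted names translate into the config file (assuming, as above, that they correspond to nested JSON objects), overriding a few planner settings might look like:

{
  "eval_params": {
    "planner_params": {
      "backtrack": true,
      "penalize_visited": true,
      "max_steps": 256
    }
  }
}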
Owner

Autonomous Learning Group