
Overview

Continual World

Continual World is a benchmark for continual reinforcement learning. It contains realistic robotic tasks which come from MetaWorld.

The core of our benchmark is the CW20 sequence, in which 20 tasks are run, each with a budget of 1M steps.
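For a rough picture of the scale, here is a minimal sketch of the sequence structure (CW20 is the CW10 sequence of 10 MetaWorld tasks repeated twice; apart from hammer-v1, the task names below are placeholders, since the real CW10 list is defined inside the package):

```python
# Illustrative sketch only: CW20 = CW10 repeated twice, 1M steps per task.
# Task names other than "hammer-v1" are placeholders, not the real CW10 list.
CW10 = ["hammer-v1"] + [f"placeholder-task-{i}-v1" for i in range(1, 10)]
CW20 = CW10 * 2                # each task appears twice, 20 tasks total
STEPS_PER_TASK = 1_000_000     # 1M environment steps per task

total_budget = STEPS_PER_TASK * len(CW20)
print(f"{len(CW20)} tasks, {total_budget:,} environment steps in total")
# -> 20 tasks, 20,000,000 environment steps in total
```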

We provide the complete source code for the benchmark, together with implementations of the tested algorithms and code for producing result tables and plots.

See also the paper and the website.

(Figure: the CW20 task sequence)

Installation

You can either install directly in a Python environment (such as virtualenv or conda) or build a container image with Docker or Singularity.

Standard installation (directly in environment)

First, you'll need the MuJoCo simulator. Please follow the instructions from the mujoco_py package. As MuJoCo has been made freely available, you can obtain a free license here.

Next, go to the main directory of this repo and run

pip install .

Alternatively, if you want to install in editable mode, run

pip install -e .
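After installing, a quick sanity check can be done from Python. This is only a sketch: the top-level module names are assumptions based on the package names used in this README, and the mujoco_py import will only succeed if MuJoCo itself is set up correctly.

```python
# Minimal import check; module names (mujoco_py, metaworld, continualworld)
# are assumptions based on the packages mentioned in this README.
import mujoco_py       # fails here if MuJoCo / mujoco_py are not configured correctly
import metaworld       # the source of the robotic tasks
import continualworld  # the package installed by `pip install .`

print("mujoco_py, metaworld and continualworld imported successfully")
```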

Docker image

  • To build the image with the continualworld package installed inside, run docker build . -f assets/Dockerfile -t continualworld

  • To build the image WITHOUT the continualworld package but with all the dependencies installed, run docker build . -f assets/Dockerfile -t continualworld --build-arg INSTALL_CW_PACKAGE=false

When the image is ready, you can run

docker run -it continualworld bash

to get inside the image.

Singularity image

  • To build the image with the continualworld package installed inside, run singularity build continualworld.sif assets/singularity.def

  • To build the image WITHOUT the continualworld package but with all the dependencies installed, run singularity build continualworld.sif assets/singularity_only_deps.def

When the image is ready, you can run

singularity shell continualworld.sif

to get inside the image.

Running

You can run single-task, continual learning, or multi-task learning experiments with the run_single.py, run_cl.py, and run_mt.py scripts, respectively.

To see the available script arguments, run with the --help option, e.g.

python3 run_single.py --help

Examples

Below are example commands that run experiments at a very limited scale.

Single task

python3 run_single.py --seed 0 --steps 2e3 --log_every 250 --task hammer-v1 --logger_output tsv tensorboard

Continual learning

python3 run_cl.py --seed 0 --steps_per_task 2e3 --log_every 250 --tasks CW20 --cl_method ewc --cl_reg_coef 1e4 --logger_output tsv tensorboard
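For context, --cl_method ewc selects Elastic Weight Consolidation, and --cl_reg_coef scales its quadratic regularizer, which anchors parameters to their values from previous tasks, weighted by a (diagonal) Fisher information estimate. A minimal, framework-agnostic sketch of that penalty, not the implementation used in this repository:

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, reg_coef):
    """Quadratic EWC penalty: reg_coef/2 * sum_i F_i * (theta_i - theta_i_old)^2.

    `params`, `old_params`, and `fisher` are dicts of arrays keyed by parameter
    name; in practice these are the current network weights, a snapshot taken
    after the previous task, and a diagonal Fisher estimate.
    """
    penalty = 0.0
    for name in params:
        diff = params[name] - old_params[name]
        penalty += np.sum(fisher[name] * diff ** 2)
    return 0.5 * reg_coef * penalty

# Toy usage with made-up parameter values.
theta = {"w": np.array([0.2, -0.1]), "b": np.array([0.05])}
theta_old = {"w": np.array([0.0, 0.0]), "b": np.array([0.0])}
fisher = {"w": np.array([1.0, 0.5]), "b": np.array([2.0])}
print(ewc_penalty(theta, theta_old, fisher, reg_coef=1e4))
```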

Multi-task learning

python3 run_mt.py --seed 0 --steps_per_task 2e3 --log_every 250 --tasks CW10 --use_popart True --logger_output tsv tensorboard
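Similarly, --use_popart enables PopArt-style normalization of the value targets: running statistics of the targets are tracked, and the output head is rescaled whenever the statistics change so that the un-normalized predictions are preserved. A self-contained illustrative sketch of that mechanism (not the code used in this repository):

```python
import numpy as np

class PopArt:
    """Tracks running mean/std of value targets and rescales a linear output
    head so that un-normalized predictions are preserved when stats change."""

    def __init__(self, beta=3e-4):
        self.mu, self.sigma, self.beta = 0.0, 1.0, beta
        self.w, self.b = 1.0, 0.0  # scalar output head, for illustration

    def normalize(self, y):
        return (y - self.mu) / self.sigma

    def update_stats(self, targets):
        old_mu, old_sigma = self.mu, self.sigma
        # Exponential moving estimates of the first and second moments.
        self.mu = (1 - self.beta) * old_mu + self.beta * np.mean(targets)
        second = (1 - self.beta) * (old_sigma**2 + old_mu**2) + self.beta * np.mean(targets**2)
        self.sigma = np.sqrt(max(second - self.mu**2, 1e-8))
        # Rescale the head so that sigma*(w*h + b) + mu stays unchanged for all h.
        self.w = self.w * old_sigma / self.sigma
        self.b = (old_sigma * self.b + old_mu - self.mu) / self.sigma

pop = PopArt()
pop.update_stats(np.array([10.0, 12.0, 8.0]))
print(pop.mu, pop.sigma)
```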

Reproducing the results from the paper

Commands to run the experiments that reproduce the main results from the paper can be found in examples/paper_cl_experiments.sh, examples/paper_mt_experiments.sh and examples/paper_single_experiments.sh. Because of the number of runs these files contain, it is infeasible to simply execute them sequentially. We hope, however, that they will be helpful because they precisely specify what needs to be run.
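If you do want to fan some of these runs out on a single machine, one simple approach is to dispatch them with a small pool of workers. This is only a sketch and assumes that every non-empty, non-comment line of the chosen file is a standalone command:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_command(cmd):
    # Each line from the .sh file is executed as its own shell command.
    print(f"starting: {cmd}")
    return subprocess.run(cmd, shell=True).returncode

with open("examples/paper_single_experiments.sh") as f:
    commands = [line.strip() for line in f
                if line.strip() and not line.strip().startswith("#")]

# Adjust max_workers to how many runs your machine can fit at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    return_codes = list(pool.map(run_command, commands))

print(f"{sum(rc == 0 for rc in return_codes)}/{len(return_codes)} runs finished successfully")
```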

After the logs from the runs have been gathered, you can produce tables and plots; see the section below.

Producing result tables and plots

After you have run experiments and saved the logs, you can run the following script to produce result tables and plots:

python produce_results.py --cl_logs examples/logs/cl --mtl_logs examples/logs/mtl --baseline_logs examples/logs/baseline

In this command, the respective arguments should point to directories containing logs from continual learning experiments, multi-task experiments, and baseline (single-task) experiments. Each of these should be a directory containing multiple experiments, for different methods and/or seeds. You can see the expected directory structure in the example logs referenced in the command above.

By default, the results will be saved to the results directory.

Alternatively, check out the nb_produce_results.ipynb notebook to see the plots and tables in notebook form.
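If you prefer to inspect the raw logs directly, here is a minimal sketch using pandas; the directory layout follows the examples above, while the assumption that each run directory contains a .tsv file written by the tsv logger may need adjusting to your setup:

```python
from pathlib import Path
import pandas as pd

log_root = Path("examples/logs/cl")          # one subdirectory per run
for run_dir in sorted(p for p in log_root.iterdir() if p.is_dir()):
    tsv_files = list(run_dir.glob("*.tsv"))  # assumes the tsv logger wrote a .tsv file here
    if not tsv_files:
        continue
    df = pd.read_csv(tsv_files[0], sep="\t")
    print(run_dir.name, df.shape, list(df.columns)[:5])
```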

Download our saved logs and produce results

You can download the logs of the experiments that reproduce the paper's results from here. Then unzip the file and run

python produce_results.py --cl_logs saved_logs/cl --mtl_logs saved_logs/mt --baseline_logs saved_logs/single

to produce tables and plots.

As a result, a CSV file with the results will be produced, as well as plots like this one (and more):

(Figure: average performance)

Full output can be found here.

Acknowledgements

Continual World heavily relies on MetaWorld.

The implementation of SAC used in our code comes from Spinning Up in Deep RL.

Our research was supported by the PLGrid infrastructure.

Our experiments were managed using Neptune.
