MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research

MiniHack Environments

MiniHack is a sandbox framework for easily designing rich and diverse environments for Reinforcement Learning (RL). Based on the game of NetHack, arguably the hardest grid-based game in the world, MiniHack uses the NetHack Learning Environment (NLE) to communicate with the game and to provide a convenient interface for custom RL testbeds.

MiniHack already comes with a large list of challenging tasks. However, it is primarily built for easily designing new ones. The motivation behind MiniHack is to enable RL experiments in a controlled setting while allowing the complexity of the tasks to be scaled up gradually.

To this end, MiniHack leverages the description files of NetHack. The description files (or des-files) are human-readable specifications of levels: distributions of grid layouts together with monsters, objects on the floor, dungeon features, etc. The des-files can be compiled into binary using the NetHack level compiler, and MiniHack maps them to Gym environments. We refer users to our brief overview, detailed tutorial, or interactive notebook for further information on des-files.
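
For illustration, the sketch below builds a tiny environment from an inline des-file string. This is not taken from the README above: the "MiniHack-Navigation-Custom-v0" entry point and its des_file keyword are assumptions based on the MiniHack documentation, and the exact registration names may differ between versions.

import gym
import minihack  # noqa: F401  # registers the MiniHack-* environments

# A hypothetical 5x5 empty room with a downward staircase, in des-file syntax
des_file = """
MAZE: "mylevel", ' '
FLAGS:premapped
GEOMETRY:center,center
MAP
.....
.....
.....
.....
.....
ENDMAP
STAIR:(4,4),down
"""

# Assumed custom-navigation entry point; check the MiniHack docs for your version
env = gym.make("MiniHack-Navigation-Custom-v0", des_file=des_file)
env.reset()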

Our documentation will walk you through everything you need to know about MiniHack, step-by-step, including information on how to get started, configure environments or design new ones, train baseline agents, and much more.

Installation

MiniHack is available on PyPI and can be installed as follows:

pip install minihack

We advise using a conda environment for this:

conda create -n minihack python=3.8
conda activate minihack
pip install minihack

NOTE: NLE requires cmake>=3.15 to be installed when building the package. See here for instructions on installing it on macOS and Ubuntu 18.04. Windows users should use Docker.

NOTE: Baseline agents have separate installation instructions. See here for more details.

Extending MiniHack

If you wish to extend MiniHack, please install the package as follows:

git clone https://github.com/facebookresearch/minihack
cd minihack
pip install -e ".[dev]"
pre-commit install

Docker

We have provided several Dockerfiles for building images with MiniHack pre-installed. Please follow the instructions described here.

Trying out MiniHack

MiniHack uses the popular Gym interface for the interactions between the agent and the environment. A pre-registered MiniHack environment can be used as follows:

import gym
import minihack
env = gym.make("MiniHack-River-v0")
env.reset() # each reset generates a new environment instance
env.step(1)  # move agent '@' north
env.render()
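
As a quick sanity check, one can roll out a random policy with the standard Gym API. The snippet below is a minimal sketch (not part of the original README) and assumes the classic Gym step/reset signatures used by the Gym versions mentioned on this page:

import gym
import minihack  # noqa: F401  # registers the MiniHack-* environments

env = gym.make("MiniHack-River-v0")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()  # sample a random action
    obs, reward, done, info = env.step(action)
    total_reward += reward
env.render()
print("Episode return:", total_reward)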

To see the list of all MiniHack environments, run:

python -m minihack.scripts.env_list

The following scripts allow you to play MiniHack environments with a keyboard:

# Play a MiniHack environment in the terminal as a human
python -m minihack.scripts.play --env MiniHack-River-v0

# Use a random agent
python -m minihack.scripts.play --env MiniHack-River-v0  --mode random

# Play a MiniHack environment with a graphical user interface (GUI)
python -m minihack.scripts.play_gui --env MiniHack-River-v0

NOTE: If the package has been properly installed, the scripts above can also be run with the mh-envs, mh-play, and mh-guiplay commands.

Baseline Agents

To get started with MiniHack environments, we provide a variety of baseline agent integrations.

TorchBeast

A TorchBeast agent is bundled in minihack.agent.polybeast together with a simple model to provide a starting point for experiments. To install and train this agent, first install torchbeast by following the instructions here, then use the following commands:

pip install ".[polybeast]"
python -m minihack.agent.polybeast.polyhydra env=MiniHack-Room-5x5-v0 total_steps=100000

More information on running our TorchBeast agents, and instructions on how to reproduce the results of the paper, can be found here. The learning curves for all of our polybeast experiments can be accessed in our Weights&Biases repository.

RLlib

An RLlib agent is provided in minihack.agent.rllib, with a model similar to that of the TorchBeast agent. It can be used to try out a variety of different RL algorithms. To install and train an RLlib agent, use the following commands:

pip install ".[rllib]"
python -m minihack.agent.rllib.train algo=dqn env=MiniHack-Room-5x5-v0 total_steps=1000000

More information on running RLlib agents can be found here.

Unsupervised Environment Design

MiniHack also enables research in Unsupervised Environment Design, whereby an adaptive task distribution is learned during training by dynamically adjusting the free parameters of the task MDP. Check out the ucl-dark/paired repository for replicating the examples from the paper using PAIRED.

Citation

If you use MiniHack in your work, please cite:

@inproceedings{samvelyan2021minihack,
  title={MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research},
  author={Mikayel Samvelyan and Robert Kirk and Vitaly Kurin and Jack Parker-Holder and Minqi Jiang and Eric Hambro and Fabio Petroni and Heinrich Kuttler and Edward Grefenstette and Tim Rockt{\"a}schel},
  booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)},
  year={2021},
  url={https://openreview.net/forum?id=skFwlyefkWJ}
}

If you use our example ported environments, please cite the original papers: MiniGrid (see license, bib), Boxoban (see license, bib).

Contributions and Maintenance

We welcome contributions to MiniHack. If you are interested in contributing, please see this document. Our maintenance plan can be found here.

Papers using MiniHack

Open a pull request to add papers.

Comments
  • Manual pickup multiple items

    πŸ› Bug

    When autopickup=True, the agent will attempt to pick up all the objects at a location. If I set autopickup=False, I can use the Command.PICKUP (,) command to pick up an item, but if there are multiple items at that location, nothing happens. I don't see any message or prompt either. If this isn't a bug, is there a workaround?

    To Reproduce

    Steps to reproduce the behavior:

    1. Set autopickup=False
    2. Spawn two different items at a single spot
    3. Attempt to pick up using Command.PICKUP

    Expected behavior

    NetHack should return a prompt listing the objects available for pickup.

    Environment

    NLE version: 0.7.3
    PyTorch version: 1.10.0+cu113
    Is debug build: No
    CUDA used to build PyTorch: 11.3
    
    OS: Ubuntu 20.04.2 LTS
    GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
    CMake version: version 3.21.3
    
    Python version: 3.8
    Is CUDA available: Yes
    CUDA runtime version: Could not collect
    GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
    Nvidia driver version: 495.29.05
    cuDNN version: Could not collect
    
    Versions of relevant libraries:
    [pip3] numpy==1.21.2
    [pip3] torch==1.10.0+cu113
    [pip3] torchtext==0.11.0
    [conda] Could not collect
    

    Additional context

    This could be a problem with NLE. If this isn't a bug, is there a workaround?

    bug 
    opened by kolbytn 6
  • [BUG] Error creating environment or running mh-play (Mac OSX 12.6)

    πŸ› Bug

    Can't create an environment or run the play scripts on Mac OSX 12.6.

    To Reproduce

    Steps to reproduce the behavior:

    1. Install NLE 0.8.1 following workaround at https://github.com/facebookresearch/nle/issues/340
    2. pip install minihack
    3. mh-play leads to error: AttributeError: 'MiniHackRoom5x5Random' object has no attribute 'env'
    4. python -m minihack.scripts.play --env MiniHack-River-v0 --mode random leads to similar error: AttributeError: 'MiniHackRiver' object has no attribute 'env'

    Expected behavior

    Environment created successfully

    Environment

    MiniHack version: 0.1.2
    NLE version: 0.8.1+103c667
    Gym version: 0.21.0
    PyTorch version: N/A
    Is debug build: N/A
    CUDA used to build PyTorch: N/A

    OS: Mac OSX 12.6
    GCC version: Could not collect
    CMake version: version 3.24.2

    Python version: 3.8
    Is CUDA available: N/A
    CUDA runtime version: Could not collect
    GPU models and configuration: Could not collect
    Nvidia driver version: Could not collect
    cuDNN version: Could not collect

    Versions of relevant libraries:
    [pip3] numpy==1.23.3
    [conda] Could not collect

    Additional context

    Used the workaround for the NLE install described here: https://github.com/facebookresearch/nle/issues/340. nle-play works as expected; mh-envs returns the list of environments as expected. The same error is encountered with Python 3.9 and 3.10, and with MiniHack version 0.1.3 (not all combinations tested).

    bug 
    opened by tmssmith 5
  • #64 issue solved: Fix access from base.py to nethack _vardir

    #64 issue solved: Fix access from base.py to nethack _vardir. I tested the modification on my local project. It seems that before this modification the library wasn't able to run, because it accessed self.env._vardir instead of self.nethack._vardir.

    CLA Signed core 
    opened by GeremiaPompei 3
  • [BUG] minihack.scripts.play doesn't work on Debian 11

    πŸ› Bug

    After installing minihack+nle on Debian 11, the following commands work:

    import gym
    import minihack
    env = gym.make("MiniHack-River-v0")
    env.reset()  # each reset generates a new environment instance
    env.step(1)  # move agent '@' north
    env.render()

    But when running

    python3 -m minihack.scripts.play --env MiniHack-River-v0

    the program fails with errors.

    To Reproduce

    Steps to reproduce the behavior:

    1. Install Debian 11. Install minihack+nle (+deps) using pip install/apt-get commands.
    2. python3 -m minihack.scripts.play --env MiniHack-River-v0

    Error messages/traceback:

    Traceback (most recent call last):
      File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/usr/local/lib/python3.9/dist-packages/minihack/scripts/play.py", line 334, in <module>
        main()
      File "/usr/local/lib/python3.9/dist-packages/minihack/scripts/play.py", line 330, in main
        play(**vars(flags))
      File "/usr/local/lib/python3.9/dist-packages/minihack/scripts/play.py", line 123, in play
        print("Available actions:", env._actions)
      File "/home/optimus/.local/lib/python3.9/site-packages/gym/core.py", line 235, in __getattr__
        raise AttributeError(
    AttributeError: attempted to get missing private attribute '_actions'

    Expected behavior

    One should be able to play minihack using keyboard commands.

    Environment

    Collecting environment information...

    MiniHack version: 0.1.1
    NLE version: 0.7.3
    PyTorch version: N/A
    Is debug build: N/A
    CUDA used to build PyTorch: N/A

    OS: Debian GNU/Linux 11 (bullseye)
    GCC version: (Debian 10.2.1-6) 10.2.1 20210110
    CMake version: version 3.18.4

    Python version: 3.9
    Is CUDA available: N/A
    CUDA runtime version: Could not collect
    GPU models and configuration: Could not collect
    Nvidia driver version: Could not collect
    cuDNN version: Could not collect

    Versions of relevant libraries:
    [pip3] msgpack-numpy==0.4.7.1
    [pip3] numpy==1.19.5
    [conda] Could not collect

    Additional context

    No Anaconda installed.

    bug 
    opened by cslr 3
  • Is it possible to generate pixel images (ideally cropped) in a desired resolution?

    Right now, I use OpenCV as follows:

    import gym
    import minihack  # registers the MiniHack-* environments
    import cv2

    # env initially holds the environment id, e.g. "MiniHack-River-v0"
    env = gym.make(env, observation_keys=("pixel_crop",), penalty_step=0.0)
    obs_dict, reward, done, info = env.step(action)  # action chosen by the agent
    image = cv2.resize(obs_dict['pixel_crop'], dsize=(64, 64), interpolation=cv2.INTER_LINEAR)
    

    I'm wondering if it's possible to avoid this resizing by just directly rendering in the desired resolution.

    enhancement 
    opened by wcarvalho 2
  • [BUG] Broken monster generation from des file

    πŸ› Bug

    I'm trying to generate different levels using des files. It works fine when I'm using just a map, but MONSTER breaks the env: instead of my map, it returns some different random levels.

    To Reproduce

    Steps to reproduce the behavior:

    1. Generate env with MONSTER
    2. Try to render it with get_des_file_rendering

    This generates random levels:

    from minihack.tiles.rendering import get_des_file_rendering
    import IPython.display
    def render_des_file(des_file, **kwargs):
        image = get_des_file_rendering(des_file, **kwargs)
        IPython.display.display(image)
    
    des = """
    MAZE: "mylevel", ' '
    FLAGS:premapped
    GEOMETRY:center,center
    
    MAP
    .....
    .....
    L....
    ..L..
    |....
    ENDMAP
    
    STAIR:(4, 4),down
    BRANCH: (0,0,0,0),(1,1,1,1)
    MONSTER:'v',"dust vortex",(0,4)
    """
    render_des_file(des, n_images=2, full_screen=False)
    

    This works OK:

    des = """
    MAZE: "mylevel", ' '
    FLAGS:premapped
    GEOMETRY:center,center
    
    MAP
    .....
    .....
    L....
    ..L..
    |....
    ENDMAP
    
    STAIR:(4, 4),down
    BRANCH: (0,0,0,0),(1,1,1,1)
    """
    render_des_file(des, n_images=2, full_screen=False)
    

    Expected behavior

    The env consists of the map described in the des file.

    Environment

    Collecting environment information...

    MiniHack version: 0.1.3
    NLE version: 0.8.1
    Gym version: 0.21.0
    PyTorch version: 1.12.0+cu113
    Is debug build: No
    CUDA used to build PyTorch: 11.3

    OS: Ubuntu 18.04.5 LTS
    GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
    CMake version: version 3.22.5

    Python version: 3.7
    Is CUDA available: Yes
    CUDA runtime version: Could not collect
    GPU models and configuration: GPU 0: Tesla T4
    Nvidia driver version: 460.32.03
    cuDNN version: Probably one of the following:
    /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
    /usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
    /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
    /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
    /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
    /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
    /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
    /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5

    Versions of relevant libraries:
    [pip3] numpy==1.21.6
    [pip3] torch==1.12.0+cu113
    [pip3] torchaudio==0.12.0+cu113
    [pip3] torchsummary==1.5.1
    [pip3] torchtext==0.13.0
    [pip3] torchvision==0.13.0+cu113
    [conda] Could not collect

    bug 
    opened by salamantos 2
  • [BUG] Inconsistent environment seeding

    πŸ› Bug

    Seeding doesn't consistently generate the same environment.

    To Reproduce

    Steps to reproduce the behavior:

    1. Run this snippet repeatedly:
    env = gym.make("MiniHack-KeyRoom-Fixed-S5-v0",
        observation_keys=("pixel", "colors", "chars", "glyphs", "tty_chars"),
        seeds=(42, 42, False))
    env.seed(42, 42, False)
    obs = env.reset()
    env.render()
    print(env.get_seeds())
    

    Sometimes this prints

    Hello Agent, welcome to NetHack!  You are a chaotic male human Rogue.           
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                           ----                                     
                                           |..|                                     
                                           +(.|                                     
                                        ----..|                                     
                                        |.....|                                     
                                        |...@.|                                     
                                        -------                                     
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
    Agent the Footpad              St:18/02 Dx:18 Co:13 In:8 Wi:9 Ch:7 Chaotic S:0  
    Dlvl:1 $:0 HP:12(12) Pw:2(2) AC:7 Xp:1/0                                        
    (42, 42, False)
    

    But also occasionally prints (note the printed seeds are (0, 0, False)):

    Hello Agent, welcome to NetHack!  You are a chaotic male human Rogue.           
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                           ----                                     
                                           |@.|                                     
                                           +..|                                     
                                           -..|                                     
                                            ..|                                     
                                            ..|                                     
                                           ----                                     
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
                                                                                    
    Agent the Footpad              St:14 Dx:18 Co:14 In:11 Wi:11 Ch:8 Chaotic S:0   
    Dlvl:1 $:0 HP:12(12) Pw:2(2) AC:7 Xp:1/0                                        
    (0, 0, False)
    

    Expected behavior

    Same positions of agent/key, and same seeds being printed by env.get_seeds()

    Environment

    
    MiniHack version: 0.1.3+57ca418
    NLE version: 0.8.1
    Gym version: 0.21.0
    PyTorch version: 1.11.0+cu102
    Is debug build: No
    CUDA used to build PyTorch: 10.2
    
    OS: Ubuntu 20.04.3 LTS
    GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
    CMake version: version 3.23.1
    
    Python version: 3.8
    Is CUDA available: Yes
    CUDA runtime version: Could not collect
    GPU models and configuration:
    GPU 0: NVIDIA GeForce RTX 3080
    GPU 1: NVIDIA GeForce RTX 3080
    
    Nvidia driver version: 510.47.03
    cuDNN version: Could not collect
    
    Versions of relevant libraries:
    [pip3] numpy==1.21.6
    [pip3] torch==1.11.0
    [conda] torch                     1.11.0                   pypi_0    pypi
    
    bug 
    opened by jlin816 2
  • [FEATURE] Suggested MiniHack Editor Webpage tweaks

    πŸš€ Feature

    Could we populate the editor with a standard level that demonstrates the individual building blocks (like the one used in the README)? Could we also have a "clear level" button to get back to the empty level that is currently shown when visiting https://minihack-editor.github.io/?

    enhancement 
    opened by rockt 2
  • [BUG] No module named 'minihack.version' when importing

    When installing via pip, you get an error when importing: "ModuleNotFoundError: No module named 'minihack.version'". This can be resolved by adding a file "version.py" with the correct info to the install directory (version = '0.1.3+4c398d4', git_version = '4c398d480eac26883104e867280d1d3ddbcb9a20').

    bug 
    opened by nesou2 2
  • With it as-is, I get 'can only concatenate list (not tuple) to list'

    I can't currently run the fb-internal minihack due to this bug. Here's the obvious fix; if there's one that's more suitable, let me know.

    Basically, what happened here was that when I switched from the public minihack to the fb-internal one, I started getting this concat issue. I'm not 100% sure what changed, but the basic issue is that before, a list was acceptable as input for the observation keys, and now it isn't. By casting to a consistent type, both should be acceptable.

    CLA Signed 
    opened by SamNPowers 2
  • [BUG] Minihack does not work with NLE v0.9.0

    πŸ› Bug

    Minihack does not work with NLE v0.9.0

    To Reproduce

    Follow the Trying Out MiniHack example

    [/usr/local/lib/python3.7/dist-packages/minihack/base.py](https://localhost:8080/#) in _patch_nhdat(self, des_file)
        366         """
        367         if not des_file.endswith(".des"):
    --> 368             fpath = os.path.join(self.env._vardir, "mylevel.des")
        369             # If the des-file is passed as a string
        370             with open(fpath, "w") as f:
    
    AttributeError: 'MiniHackRiver' object has no attribute 'env'
    
    bug 
    opened by ngoodger 1
  • [FEATURE] monobeast baseline implementation

    πŸš€ Feature

    The current polybeast implementation has most of its code written in C++; requesting a monobeast implementation for more clarity.

    Motivation

    readability/flexibility

    Pitch

    A monobeast implementation will offer more readability and flexibility.

    Alternatives

    N/A

    Additional context

    N/A

    enhancement 
    opened by Andrewzh112 0
Releases (v0.1.4)
  • v0.1.4(Dec 9, 2022)

    Installing MiniHack

    Install with pip: pip install minihack==0.1.4.

    See README.md for further instructions.

    New in MiniHack v0.1.4

    • MiniHack version 0.1.4 (#67, @samvelyan)
    • Gym issue fix (#58, @samvelyan)
    • pushing the fix for more height in the logo (#49, @Bam4d)
    • [WIP] Bam4d/level editor (#46, @Bam4d)

    πŸ“ Documentation

    • Deleted level editor site code (#50, @samvelyan)

    πŸ”¨ Maintenance

    • Fixing the seeding issue (#68, @samvelyan)
    • Fixing the NetHack variable renaming and _underscore access recently introduced in NLE==0.9.0 (#66, @samvelyan)

    🎑 Environment

    • Fixing the NetHack variable renaming and _underscore access recently introduced in NLE==0.9.0 (#66, @samvelyan)
    • Fix forced actions (#55, @ian-cannon)
  • v0.1.3(Mar 14, 2022)

    Installing MiniHack

    Install with pip: pip install minihack==0.1.3.

    See README.md for further instructions.

    New in MiniHack v0.1.3

    πŸ“ Documentation

    • MiniHack Environment Zoo (#38, @samvelyan)

    πŸ”¨ Maintenance

    • A flag for including pet to the game (#40, @samvelyan)

    🎑 Environment

    • Turned autopickup off for ExploreMaze envs (#45, @samvelyan)
    • Fixing boxoban level data path (#42, @samvelyan)
  • v0.1.2(Nov 30, 2021)

    Installing MiniHack

    Install with pip: pip install minihack==0.1.2.

    See README.md for further instructions.

    New in MiniHack v0.1.2

    • Cached Environment Wrapper (#33, @samvelyan)
    • Printing the gym version in the collect_env script (#30, @samvelyan)
    • Update README.md (#24, @samvelyan)
    • Updating the PR labeler and Release Drafter (#23, @samvelyan)

    πŸ“ Documentation

    • Fixes to the documentation (#37, @samvelyan)
    • Updating docs (#25, @samvelyan)
    • Fixed Typo (#22, @mohamadmansourX)

    πŸ”¨ Maintenance

    • Bump the MiniHack and NLE versions (#36, @samvelyan)
    • Supporting gym version 0.21.0 (#31, @samvelyan)

    🎑 Environment

    • Fixing seeding in MiniGrid (#34, @samvelyan)
    • Supporting gym version 0.21.0 (#31, @samvelyan)
  • v0.1.1(Sep 30, 2021)

    Installing MiniHack

    Install with pip: pip install minihack==0.1.1.

    See README.md for further instructions.

    New in MiniHack v0.1.1

    • Added a workflow for testing and pushing the releases to PyPI (#21, @samvelyan)
    • Importing Pillow whenever needed. (#20, @samvelyan)
    • Release drafter GitHub workflow (#19, @samvelyan)
    • Being able to save gifs when evaluating pre-trained agents (#18, @samvelyan)
    • Updating README (#16, @samvelyan)
    • Update MANIFEST.in (#15, @samvelyan)

    πŸ“ Documentation

    • Updating REAMDE (#17, @samvelyan)