
Overview

Godot RL Agents

Godot RL Agents is a fully open-source package that gives video game creators, AI researchers and hobbyists the opportunity to learn complex behaviors for their non-player characters or agents. This repository provides:

  • An interface between games created in Godot and Machine Learning algorithms running in Python
  • Access to 21 state-of-the-art Machine Learning algorithms, provided by the Ray RLlib framework
  • Support for memory-based agents, with LSTM- or attention-based interfaces
  • Support for 2D and 3D games
  • A suite of AI sensors to augment your agent's capacity to observe the game world
  • Godot and Godot RL Agents are completely free and open source under the very permissive MIT license. No strings attached, no royalties, nothing.
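
As a rough illustration of the first point (the Godot-to-Python interface), the sketch below shows how the Python side might be driven. This is a hedged example, not official usage: the GodotEnv class and its port/seed arguments appear in tracebacks later on this page, but the gym-style reset/step loop, action_space and close() are assumptions; consult the documentation for the real API.

    # Hypothetical usage sketch -- assumes GodotEnv exposes a gym-style API.
    # The port value and the reset/step/close signatures are assumptions.
    from godot_rl.core.godot_env import GodotEnv

    env = GodotEnv(port=11008, seed=0)       # class and args seen in tracebacks on this page
    obs = env.reset()
    for _ in range(1000):
        action = env.action_space.sample()   # random placeholder policy
        obs, reward, done, info = env.step(action)
        if done:
            obs = env.reset()
    env.close()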
Trailer video: godot_rl_agents_trailer_v01_20211008.mp4

Contents

  1. Motivation
  2. Citing Godot RL Agents
  3. Installation
  4. Examples
  5. Documentation
  6. Roadmap
  7. FAQ
  8. Licence
  9. Acknowledgments
  10. References

Motivation

Over the next decade, advances in AI algorithms, notably in the fields of Machine Learning and Deep Reinforcement Learning, are primed to revolutionize the video game industry. Customizable enemies, worlds and storytelling will lead to diverse gameplay experiences and new genres of games. Currently the field is dominated by large organizations and pay-to-use engines that have the budget to create such AI-enhanced agents. The objective of the Godot RL Agents package is to lower the barrier to accessibility so that game developers can take their idea from creation to publication end-to-end with a free and open-source package.

Citing Godot RL Agents

@misc{beeching2021godotrlagents,
  author = {Edward Beeching},
  title = {Godot RL agents},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/edbeeching/godot_rl_agents}},
}

Installation

Please follow the installation instructions to install Godot RL Agents.

Examples

We provide several reference implementations as well as instructions for implementing your own environment; please refer to the Examples documentation.

Creating custom environments

Once you have studied the example environments, you can follow the instructions in Custom environments in order to make your own.

Roadmap

We have a number of features that will soon be available in versions 0.2.0 and 0.3.0. Refer to the Roadmap for more information.

FAQ

  1. Why have we developed Godot RL Agents? The objectives of the framework are to:
  • Provide a free and open source tool for Deep RL research and game development.
  • Enable game creators to imbue their non-player characters with unique behaviors.
  • Allow for automated gameplay testing through interaction with an RL agent.
  2. How can I contribute to Godot RL Agents? Please try it out, find bugs and either raise an issue or, if you fix them yourself, submit a pull request.
  3. When will you be providing Mac support? I would like to provide this ASAP, but I do not own a Mac so I cannot perform any manual testing of the codebase.
  4. Can you help with my game project? If the example games do not provide enough information, reach out to us on GitHub and we may be able to provide some advice.
  5. How similar is this tool to Unity ML-Agents? We are inspired by the Unity ML-Agents Toolkit and make no effort to hide it.

Licence

Godot RL Agents is MIT licensed. See the LICENSE file for details.

"Cartoon Plane" (https://skfb.ly/UOLT) by antonmoek is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

Acknowledgments

We thank the authors of the Godot Engine for providing such a powerful and flexible game engine for AI agent development. We thank the developers of Ray and Stable Baselines for creating easy-to-use and powerful RL training frameworks. We thank the creators of the Unity ML-Agents Toolkit, which inspired us to create this work.

References

Comments
  • How do I use rllib for the examples provided?

    So, I found out that sample-factory is not supported on Windows, and rllib is the only backend that installed successfully on my PC. How can I use rllib to run the examples provided and make my own RL environments with it?

    opened by ryash072007 13
  • Unable to install RL agents.

    It says package not found:

    (base) PS C:\Users\Jetpackjules\Downloads\godot_rl_agents-0.2.2> conda env create
    Collecting package metadata (repodata.json): done
    Solving environment: failed

    ResolvePackageNotFound:

    • libffi=3.3
    • libunistring=0.9.10
    • libopus=1.3.1
    • libtasn1=4.16.0
    • openh264=2.1.1
    • x264=1!157.20191217
    • libidn2=2.3.2
    • libvpx=1.7.0
    • _openmp_mutex=4.5
    • lame=3.100
    • ncurses=6.3
    • gmp=6.2.1
    • freetype=2.11.0
    • gnutls=3.6.15
    • readline=8.1.2
    • nettle=3.7.3
    • libgcc-ng=9.3.0
    • libgomp=9.3.0
    • libstdcxx-ng=9.3.0
    • ld_impl_linux-64=2.35.1
    opened by Jetpackjules11 7
  • Installation Help

    I am a complete novice with GitHub and conda and I am having trouble installing (likely user error). Looking for specific help or general guidance on where to go for help. I am on Windows. It seems solving the environment fails, maybe because of the linux-64 line or because the prefix at the bottom of the .yml file points to an unknown directory. Thanks in advance for any advice.

    I installed the full Anaconda so I could use the Navigator, opened a PowerShell prompt, cd'd to the directory with the godot_rl_agents folder and environment.yml, and ran "conda env create". Output:

    Collecting package metadata (repodata.json): done
    Solving environment: failed

    ResolvePackageNotFound:

    • ld_impl_linux-64=2.35.1
    opened by Quantemplation 4
  • Solving environment: failed  ResolvePackageNotFound when creating environment in Windows

    Hello Ed!

    I've tried following the install instructions for Windows but I get the following error:

    (base) PS F:\Repos\godot_rl_agents> conda env create
    Collecting package metadata (repodata.json): done
    Solving environment: failed
    
    ResolvePackageNotFound:
      - zstd==1.4.9=haebb681_0
      - openssl==1.1.1m=h7f8727e_0
      - cudatoolkit==11.3.1=h2bc3f7f_2
      - _openmp_mutex==4.5=1_gnu
      - jpeg==9d=h7f8727e_0
      - freetype==2.11.0=h70c0345_0
      - libstdcxx-ng==9.3.0=hd4cf53a_17
      - ca-certificates==2022.2.1=h06a4308_0
      - lz4-c==1.9.3=h295c915_1
      - nettle==3.7.3=hbbd107a_1
      - mkl_fft==1.3.1=py38hd3c417c_0
      - lame==3.100=h7b6447c_0
      - bzip2==1.0.8=h7b6447c_0
      - gnutls==3.6.15=he1e5248_0
      - ld_impl_linux-64==2.35.1=h7274673_9
      - libgomp==9.3.0=h5101ec6_17
      - openh264==2.1.1=h4ff587b_0
      - pytorch==1.11.0=py3.8_cuda11.3_cudnn8.2.0_0
      - certifi==2021.10.8=py38h06a4308_2
      - x264==1!157.20191217=h7b6447c_0
      - libwebp-base==1.2.2=h7f8727e_0
      - ncurses==6.3=h7f8727e_2
      - pillow==9.0.1=py38h22f2fdc_0
      - cryptography==36.0.0=py38h9ce1e76_0
      - mkl-service==2.4.0=py38h7f8727e_0
      - lcms2==2.12=h3be6417_0
      - libuv==1.40.0=h7b6447c_0
      - gmp==6.2.1=h2531618_2
      - tk==8.6.11=h1ccaba5_0
      - python==3.8.12=h12debd9_0
      - libvpx==1.7.0=h439df22_0
      - numpy==1.21.2=py38h20f2e39_0
      - mkl_random==1.2.2=py38h51133e4_0
      - libunistring==0.9.10=h27cfd23_0
      - pip==21.2.4=py38h06a4308_0
      - mkl==2021.4.0=h06a4308_640
      - xz==5.2.5=h7b6447c_0
      - intel-openmp==2021.4.0=h06a4308_3561
      - ffmpeg==4.2.2=h20bf706_0
      - libtasn1==4.16.0=h27cfd23_0
      - numpy-base==1.21.2=py38h79a1101_0
      - brotlipy==0.7.0=py38h27cfd23_1003
      - libopus==1.3.1=h7b6447c_0
      - libtiff==4.2.0=h85742a9_0
      - libwebp==1.2.2=h55f646e_0
      - libffi==3.3=he6710b0_2
      - libgcc-ng==9.3.0=h5101ec6_17
      - libidn2==2.3.2=h7f8727e_0
      - setuptools==58.0.4=py38h06a4308_0
      - pysocks==1.7.1=py38h06a4308_0
      - zlib==1.2.11=h7f8727e_4
      - sqlite==3.38.0=hc218d9a_0
      - giflib==5.2.1=h7b6447c_0
      - readline==8.1.2=h7f8727e_1
      - libpng==1.6.37=hbc83047_0
      - cffi==1.15.0=py38hd667e15_1
    

    It seems like conda is unable to find those packages on Windows. I think it's due to the build numbers (ex zstd==1.4.9=haebb681_0) referencing a build for a different platform. I've created a new environment specification where I've removed them with conda env export -n gdrl_conda -f .\environment.yml --no-builds and was able to create the environment with the original command conda env create.
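
    A small, hypothetical Python helper with a similar effect (stripping the platform-specific build strings from an existing environment.yml) could look like the sketch below; the file name and the regex for the usual conda export format are assumptions:

    # Hypothetical helper: strip conda build strings (pkg=ver=build -> pkg=ver)
    # from an environment.yml, similar in effect to `conda env export --no-builds`.
    import pathlib
    import re

    path = pathlib.Path("environment.yml")
    text = path.read_text()
    text = re.sub(r"^(\s*- [^=\s]+=[^=\s]+)=[^\s]+$", r"\1", text, flags=re.M)
    path.write_text(text)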

    opened by PhilippeMarcotte 4
  • People who want to use SF in windows, read this:

    For people who want to use SF (sample-factory) on Windows because of its features, I recommend WSL. I'll update this issue with my progress and possible problems you may face trying to set up WSL and/or get SF running in it.

    opened by ryash072007 3
  • Training stuck in "PENDING" status and editor not connecting

    I followed the installation instructions provided and everything goes well, but I couldn't train or use the pretrained models from any of the example envs. First of all, when I use the following command:

    gdrl --env_path envs/builds/JumperHard/jumper_hard.x86_64 --config_path envs/configs/ppo_config_jumper_hard.yaml

    It says

    usage: gdrl [-h] [--env_path ENV_PATH] [-f CONFIG_FILE] [-c RESTORE] [-e]
    gdrl: error: unrecognized arguments: --config_path envs/configs/ppo_config_jumper_hard.yaml

    So I just changed the argument --config_path to -f and now it works, but...

    == Status ==
    Memory usage on this node: 6.1/15.5 GiB
    Using FIFO scheduling algorithm.
    Resources requested: 0/4 CPUs, 0/0 GPUs, 0.0/7.38 GiB heap, 0.0/3.69 GiB objects
    Result logdir: /home/hibiscus-tea/ray_results/PPO/jumper_hard
    Number of trials: 1/1 (1 PENDING)
    +-----------------------+----------+-------+
    | Trial name            | status   | loc   |
    |-----------------------+----------+-------|
    | PPO_godot_0479d_00000 | PENDING  |       |
    +-----------------------+----------+-------+

    It stays like that forever. Neither running jumper_hard.x86_64 nor running the game from the editor changes anything. If I use the pretrained-model command it stays the same. I tried the same process on Windows 10 and I get the same results. I think I am missing something. The editor outputs this:

    getting command line arguments Waiting for one second to allow server to start trying to connect to server 03

    If I change the const DEFAULT_PORT to 6007 (the default godot port) it outputs this:

    getting command line arguments Waiting for one second to allow server to start trying to connect to server 02 performing handshake server disconnected, closing

    I hope you can help me with this issue. This project looks amazing and I am looking forward to the multi-agents update. :)
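
    A generic way to debug this kind of stuck connection, independent of Godot RL Agents, is to check from Python whether anything is actually listening on the port the game tries to reach. A minimal standard-library sketch (the port number is just an example):

    # Quick stdlib check: is anything listening on the port the Godot game
    # will try to connect to? 11008 is an example value; use whatever
    # DEFAULT_PORT your sync script expects.
    import socket

    def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
        """Return True if a TCP connection to (host, port) succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(port_open("127.0.0.1", 11008))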

    opened by AleryBerry 3
  • TypeError: '>=' not supported between instances of 'list' and 'int'

    Traceback (most recent call last):
      File "C:\Users\ryash\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "C:\Users\ryash\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\Scripts\gdrl.exe\__main__.py", line 7, in <module>
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\main.py", line 108, in main
        training_function(args, extras)
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\wrappers\stable_baselines_wrapper.py", line 78, in stable_baselines_training
        env = StableBaselinesGodotEnv()
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\wrappers\stable_baselines_wrapper.py", line 12, in __init__
        self.env = GodotEnv(port=port, seed=seed)
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\core\godot_env.py", line 44, in __init__
        self._get_env_info()
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\core\godot_env.py", line 235, in _get_env_info
        observation_spaces[k] = spaces.Discrete(v["size"])
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\gym\spaces\discrete.py", line 15, in __init__
        assert n >= 0
    TypeError: '>=' not supported between instances of 'list' and 'int'
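
    For context, this error comes from gym's spaces.Discrete, which expects a single integer size; if the environment reports a list-valued size, a different space type is needed. A minimal, hedged reproduction (the values are illustrative, not taken from the codebase):

    # Minimal reproduction of the error above, plus the space types that accept
    # list/shape-valued sizes. The values are illustrative only.
    from gym import spaces

    ok = spaces.Discrete(4)                    # fine: a single non-negative integer
    # spaces.Discrete([4, 3])                  # TypeError: '>=' not supported between 'list' and 'int'

    # A list-valued size usually describes a multi-dimensional observation,
    # which maps to a Box or MultiDiscrete space instead:
    obs_space = spaces.Box(low=-1.0, high=1.0, shape=(8,))
    act_space = spaces.MultiDiscrete([4, 3])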

    opened by ryash072007 2
  • Installation Problems

    Hi there,

    I am currently looking into your project and it looks super interesting.

    Unfortunately I have trouble installing the environment on Windows. The first errors occur when running the instruction conda env create from the installation guide. See screenshot: Screenshot 2022-10-23 112009

    Could it be that you are using packages for Linux only? _openmp_mutex=4.5 seems to be one of them. Is there a way to get this project running on Windows? That would be cool, because I am considering using it for my master's thesis.

    Cheers!

    opened by visuallization 2
  • Reward always displayed as nan

    Hello,

    I am having another issue: the rewards are always displayed as nan in the console, like this:

    == Status ==
    Current time: 2022-06-21 15:40:17 (running for 00:04:32.32)
    Memory usage on this node: 14.3/31.3 GiB
    Using FIFO scheduling algorithm.
    Resources requested: 2.0/16 CPUs, 1.0/1 GPUs, 0.0/13.01 GiB heap, 0.0/6.5 GiB objects (0.0/1.0 accelerator_type:G)
    Result logdir: /home/ls11det/ray_results/PPO/editor
    Number of trials: 1/1 (1 RUNNING)
    +-----------------------+----------+-----------------------+--------+------------------+------+----------+----------------------+----------------------+--------------------+
    | Trial name            | status   | loc                   |   iter |   total time (s) |   ts |   reward |   episode_reward_max |   episode_reward_min |   episode_len_mean |
    |-----------------------+----------+-----------------------+--------+------------------+------+----------+----------------------+----------------------+--------------------|
    | PPO_godot_0dbb4_00000 | RUNNING  | 129.217.38.190:865027 |      3 |          208.046 | 3072 |      nan |                  nan |                  nan |                nan |
    +-----------------------+----------+-----------------------+--------+------------------+------+----------+----------------------+----------------------+--------------------+
    

    I even tried just returning a constant number as the reward, to see if any of my code was causing the issue, but it is still displayed as nan:

    func get_reward():
    	# What behavior do you want to reward, kills? penalties for death, key waypoints
    	return 0.5
    

    I also added a print statement in the sync.gd script where it collects and sends the reward, and it picks up the 0.5 correctly. Is there anything I am missing here?
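
    One common cause of nan episode statistics in RLlib (not necessarily the cause here) is that no episode has completed within the reporting window: episode_reward_mean is averaged over finished episodes only, and the mean of an empty set comes out as nan, as this small numpy illustration shows:

    # Illustration: averaging an empty collection yields nan, which is what the
    # status table shows until at least one episode has completed.
    import numpy as np

    completed_episode_rewards = []                 # no episode has finished yet
    print(np.mean(completed_episode_rewards))      # nan (numpy warns about the empty mean)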

    opened by themars2011 2
  • BallChase example: Does best_fruit_distance need a reset after collection?

    I am not sure if I understand the examples correctly. In the BallChase example best_fruit_distance is initialized and reset in the reset() method. But shouldn't it also be reset after every fruit collection? Only the distance reduction to the first fruit gets rewarded at the moment.
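
    For illustration, a hedged Python sketch of the shaping logic being described (the names are made up and this is not the project's GDScript); the point is that the best-distance tracker has to be reset whenever a fruit is collected:

    # Illustrative distance-shaped reward; names are hypothetical, not from the repo.
    import math

    best_fruit_distance = math.inf

    def shaped_reward(agent_pos, fruit_pos, fruit_collected):
        """Reward progress toward the current fruit; reset the tracker on collection."""
        global best_fruit_distance
        if fruit_collected:
            best_fruit_distance = math.inf     # forget the old fruit's distance
            return 1.0                         # collection bonus
        distance = math.dist(agent_pos, fruit_pos)
        reward = 0.0
        if math.isfinite(best_fruit_distance) and distance < best_fruit_distance:
            reward = best_fruit_distance - distance
        best_fruit_distance = min(best_fruit_distance, distance)
        return reward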

    bug 
    opened by mischkadb 2
  • Errors with default config: KeyError "observation_space"

    Hi, I just installed godot_rl_agents as described in the installation instructions. I have been trying to train an agent for one of the default envs but I get the following error

    (pid=38965) KeyError: 'observation_space'
    (pid=38965) SCRIPT ERROR: handle_message: Invalid get index 'type' (on base: 'Nil').
    (pid=38965)    At: res://addons/godot_rl_agents/sync.gdc:172.
    Traceback (most recent call last):
      File "/home/ashutosh/HDD/anaconda3/envs/godot_rl/bin/gdrl", line 33, in <module>
        sys.exit(load_entry_point('godot-rl-agents', 'console_scripts', 'gdrl')())
      File "/home/ashutosh/HDD/MachineLearning/godot_rl_agents/godot_rl_agents/core/main.py", line 91, in main
        results = tune.run(
      File "/home/ashutosh/HDD/anaconda3/envs/godot_rl/lib/python3.8/site-packages/ray/tune/tune.py", line 555, in run
        raise TuneError("Trials did not complete", incomplete_trials)
    

    I also manually tried printing json_dict and here are the contents:

    {'algorithm': 'PPO', 'stop': {'episode_reward_mean': 5000, 'training_iteration': 1000, 'timesteps_total': 200000000}, 'config': {'env': 'godot', 'env_config': {'framerate': None, 'action_repeat': None, 'show_window': False, 'seed': 0, 'env_path': 'envs/builds/BallChase/ball_chase.x86_64'}, 'framework': 'torch', 'lambda': 0.95, 'gamma': 0.95, 'vf_clip_param': 100.0, 'clip_param': 0.2, 'entropy_coeff': 0.001, 'entropy_coeff_schedule': None, 'train_batch_size': 1024, 'sgd_minibatch_size': 128, 'num_sgd_iter': 16, 'num_workers': 4, 'lr': 0.0003, 'num_envs_per_worker': 16, 'batch_mode': 'truncate_episodes', 'rollout_fragment_length': 32, 'num_gpus': 1, 'model': {'fcnet_hiddens': [256, 256], 'num_framestacks': 4}, 'no_done_at_end': True, 'soft_horizon': True}}
    

    Here's the full log : https://www.toptal.com/developers/hastebin/epovenonow.yaml

    Do I absolutely need to keep the Godot editor open? I'm currently using the ball_chase.x86_64 from the repo.

    Lastly, opening an environment in Godot starts it with 16 agents together. Is there a way to change this?

    opened by ashutoshbsathe 2
  • Unable to open any example in the godot editor

    I just get a message that says "the following file does not specify the version of Godot with which it was created. If you proceed with opening it, it will be configured for Godot's file format", and when I force open it, the project immediately closes. (This means I can't run "gdrl.interactive".)

    I also noticed that ryash072007 managed to get sb3 working to some extent, and would greatly appreciate any advice on how to accomplish that.

    (I am using Anaconda Powershell prompt and Godot 3.5.1)

    opened by Jetpackjules11 4
  • What may be happening if Godot freezes when performing handshake?

    I'm using a Linux VM to run the SF part of the training and am using port forwarding to allow it to communicate with my host computer. However, while performing the handshake, the game just gets stuck. I have tried debugging this but nothing worked. Do you know what may be happening?

    opened by ryash072007 4
  • Export model to ONNX

    This is a suggestion/request to which I want to contribute. I have started work on this feature (which I have committed to my fork), but I am not well versed in Torch code. I have gotten to the point where the model gets loaded from the checkpoint, but I get an error saying I need to pass a Tensor of shape [...,8] to the torch.onnx.export function.

    opened by yaelatletl 6
  • Using TorchSharp in Godot

    Hi, Ed! I have a problem using the TorchSharp NuGet lib in the C# version of Godot. Every time I try to use it in Godot I get an error like:

    System.DllNotFoundException: LibTorchSharp assembly: unknown assembly type: unknown type member:

    But the same code can work in a regular console project without godot involved.

    I see you mentioned in another issue (https://github.com/virtualmlnet/hackathon-2021/issues/6#issuecomment-968059783) that you have tried TorchSharp; it seems that it can work but just does not support the ONNX format. If so, can you share how you configured the Godot project to make it work with TorchSharp? Or maybe you could share a demo project?

    opened by HangedDream 1
  • Questions on performance and headless

    Hi @edbeeching

    thanks for your API!

    I've got two questions: In your paper you state that 12k interactions per second are recorded. How many environments ran in parallel for this result? And do you need X for running environments featuring visual observations? Your roadmap says that headless mode is not supported yet.

    I'm basically looking for alternatives to ML-Agents that run significantly faster; a single Unity build with only one environment is capable of generating only about 200-300 interactions per second.

    opened by MarcoMeter 1
Releases(v0.2.2)
  • v0.2.2(Apr 21, 2022)

  • v0.2.1(Mar 28, 2022)

  • v0.2.0(Mar 24, 2022)

    Implemented a number of features, bug fixes and improvements to the documentation, including:

    • An updated sensor suite.
    • New checkpoints for the updated sensors.
    • The conda environment should now work out of the box and support GPUs. #8 #9
    • Fixed a bug with the reward function in the BallChase env #11
    • Improved documentation #7
  • v0.1.0(Oct 17, 2021)

Owner
Edward Beeching, PhD student in Deep Reinforcement Learning at INRIA, Chroma research group, INSA Lyon, France.