COVINS -- A Framework for Collaborative Visual-Inertial SLAM and Multi-Agent 3D Mapping

Version 1.0

COVINS is an accurate, scalable, and versatile visual-inertial collaborative SLAM system that enables a group of agents to simultaneously co-localize and jointly map an environment.

COVINS provides a server back-end for collaborative SLAM, running on a local machine or a remote cloud instance, that generates collaborative estimates from map data contributed by different agents running Visual-Inertial Odometry (VIO) and sharing their maps with the back-end. COVINS also provides a generic communication module to interface with the keyframe-based VIO of your choice. Here, we provide an example of COVINS interfaced with the VIO front-end of ORB-SLAM3, as well as guidance and examples of how to run COVINS on the EuRoC dataset.

Index

  1. Related Publications
  2. License
  3. Basic Setup
  4. Running COVINS
  5. Docker Implementation
  6. Extended Functionalities
  7. Limitations and Known Issues

1 Related Publications

[COVINS] Patrik Schmuck, Thomas Ziegler, Marco Karrer, Jonathan Perraudin and Margarita Chli. COVINS: Visual-Inertial SLAM for Centralized Collaboration. IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021. PDF

[Redundancy Detection] Patrik Schmuck and Margarita Chli. On the Redundancy Detection in Keyframe-based SLAM. IEEE International Conference on 3D Vision (3DV), 2019. PDF

[System Architecture] Patrik Schmuck and Margarita Chli. CCM-SLAM: Robust and Efficient Centralized Collaborative Monocular Simultaneous Localization and Mapping for Robotic Teams. Journal of Field Robotics (JFR), 2019. PDF

[Collaborative VI-SLAM] Patrik Schmuck, Marco Karrer and Margarita Chli. CVI-SLAM - Collaborative Visual-Inertial SLAM. IEEE Robotics and Automation Letters (RA-L), 2018. PDF

2 License

COVINS is released under a GPLv3 license. For a list of code/library dependencies (and associated licenses), please see thirdparty_code.md.

For license-related questions, please contact the authors: collaborative (dot) slam (at) gmail (dot) com.

If you use COVINS in an academic work, please cite:

@article{schmuck2021covins,
  title={COVINS: Visual-Inertial SLAM for Centralized Collaboration},
  author={Schmuck, Patrik and Ziegler, Thomas and Karrer, Marco and Perraudin, Jonathan and Chli, Margarita},
  journal={arXiv preprint arXiv:2108.05756},
  year={2021}
}

3 Basic Setup

This section explains how you can build the COVINS server back-end, as well as the provided version of the ORB-SLAM3 front-end able to communicate with the back-end. COVINS was developed under Ubuntu 18.04, and we provide installation instructions for 18.04 as well as 20.04. Note that we also provide a Docker implementation for simplified deployment of COVINS.

Environment Setup

Dependencies

  • sudo apt-get update
  • Doxygen: sudo apt-get install doxygen
  • SuiteSparse: sudo apt-get install libsuitesparse-dev
  • YAML: sudo apt-get install libyaml-cpp-dev
  • VTK: sudo apt-get install libvtk6-dev
  • catkin_tools (from the catkin_tools manual)
    • sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu `lsb_release -sc` main" > /etc/apt/sources.list.d/ros-latest.list'
    • wget http://packages.ros.org/ros.key -O - | sudo apt-key add -
    • sudo apt-get update
    • sudo apt-get install python3-catkin-tools
  • ws_tools: sudo apt-get install python3-wstool
  • OMP: sudo apt-get install libomp-dev
  • Glew: sudo apt install libglew-dev
  • ROS
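
For convenience, the plain apt packages above can be installed in one command (note that python3-catkin-tools requires the ROS package source to be set up first, as shown above):

  sudo apt-get update
  sudo apt-get install doxygen libsuitesparse-dev libyaml-cpp-dev libvtk6-dev python3-catkin-tools python3-wstool libomp-dev libglew-dev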

Set up your workspace

This will create a workspace for COVINS as ~/ws/covins_ws. All further commands will use this path structure - if you decide to change the workspace path, you will need to adjust the commands accordingly.

  • cd ~
  • mkdir -p ws/covins_ws/src
  • cd ~/ws/covins_ws
  • catkin init
  • ROS Setup
    • U18/Melodic: catkin config --extend /opt/ros/melodic/
    • U20/Noetic: catkin config --extend /opt/ros/noetic/
  • catkin config --merge-devel
  • catkin config --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo

COVINS Installation

We provide a script (covins/install_file.sh) that will perform a full installation of COVINS, including back-end, front-end, and third-party packages, if the environment is set up correctly. If the installation fails, we strongly recommend executing the steps in the build script manually one by one. The script might not perform a correct installation under certain circumstances if executed multiple times.

  • cd ~/ws/covins_ws/src
  • git clone https://github.com/VIS4ROB-lab/covins.git
  • cd ~/ws/covins_ws
  • chmod +x src/covins/install_file.sh
  • ./src/covins/install_file.sh 8
    • The argument 8 is optional, and specifies the number of jobs the build process should use.

Generally, if the build process of COVINS or ORB-SLAM3 fails, make sure you have correctly sourced the workspace and that the libraries in the third-party folders, such as DBoW2 and g2o, are built correctly.

A remark on fix_eigen_deps.sh: compiling code with dependencies against multiple Eigen versions is usually fatal and must be avoided. Therefore, we specify and download the Eigen version explicitly through the eigen_catkin package, and make sure all Eigen dependencies point to this package.
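
A minimal sketch of what this looks like in a catkin package (illustrative only; the exact targets in your package may differ):

  <!-- package.xml: depend on the catkinized Eigen instead of the system version -->
  <depend>eigen_catkin</depend>

  # CMakeLists.txt: Eigen's include directories then arrive via catkin
  find_package(catkin REQUIRED COMPONENTS eigen_catkin)
  include_directories(${catkin_INCLUDE_DIRS})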

Installing ROS Support for the ORB-SLAM3 Front-End

If you want to use rosbag files to pass sensor data to COVINS, you need to explicitly build the ORB-SLAM3 front-end with ROS support.

  • Install vision_opencv:
    • cd ~/ws/covins_ws/src
    • Clone: git clone https://github.com/ros-perception/vision_opencv.git
    • Check out the correct branch (run inside the cloned vision_opencv folder)
      • U18/Melodic: git checkout melodic
      • U20/Noetic: git checkout noetic
    • Go to ~/ws/covins_ws/src/vision_opencv/cv_bridge/CMakeLists.txt
    • Add the opencv3_catkin dependency: change the line find_package(catkin REQUIRED COMPONENTS rosconsole sensor_msgs) to find_package(catkin REQUIRED COMPONENTS rosconsole sensor_msgs opencv3_catkin) (see the snippet after this list)
    • If you are running Ubuntu 20 (or generally have OpenCV 4 installed): remove the lines that search for an OpenCV 4 version in the CMakeLists.txt
    • source ~/ws/covins_ws/devel/setup.bash
    • catkin build cv_bridge
    • [Optional] Check correct linkage:
      • cd ~/ws/covins_ws/devel/lib
      • ldd libcv_bridge.so | grep opencv_core
      • This should only list libopencv_core.so.3.4 as a dependency
  • catkin build ORB_SLAM3
  • [Optional] Check correct linkage:
    • cd ~/ws/covins_ws/src/covins/orb_slam3/Examples/ROS/ORB_SLAM3
      • ldd Mono_Inertial | grep opencv_core
      • This should mention libopencv_core.so.3.4 as the only libopencv_core dependency
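
For reference, the cv_bridge change from above as it appears in CMakeLists.txt:

  # before
  find_package(catkin REQUIRED COMPONENTS rosconsole sensor_msgs)
  # after: opencv3_catkin added so cv_bridge links against the catkinized OpenCV 3
  find_package(catkin REQUIRED COMPONENTS rosconsole sensor_msgs opencv3_catkin)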

4 Running COVINS

This section explains how to run COVINS on the EuRoC dataset. If you want to use a different dataset, please do not forget to use the correct parameter file instead of covins/orb_slam3/Examples/Monocular-Inertial/EuRoC.yaml.

Setting up the environment

  • In ~/ws/covins_ws/src/covins/covins_comm/config/config_comm.yaml: adjust the value of sys.server_ip to the IP of the machine where the COVINS back-end is running
  • In each of the provided scripts to run the ORB-SLAM3 front-end (e.g., euroc_examples_mh1.sh, in orb_slam3/covins_examples/), adjust pathDatasetEuroc to the path where the dataset has been uncompressed. The default expected path is <pathDatasetEuroc>/MH_01_easy/mav0/... (for euroc_examples_mh1.sh, in this case)
  • In ~/ws/covins_ws/src/covins/covins_backend/config/config_backend.yaml: adjust the path of sys.map_path0 to the directory where you would like to load maps from.
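
For reference, the adjusted entries look like this (illustrative values; the flat dotted key style follows the parameter names used above, but check the config files themselves for the exact layout):

  # covins_comm/config/config_comm.yaml
  sys.server_ip: '192.168.1.42'        # IP of the machine running the back-end

  # covins_backend/config/config_backend.yaml
  sys.map_path0: '/home/user/maps/'    # directory to load maps from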

Running the COVINS Server Back-End

  • Source your workspace: source ~/ws/covins_ws/devel/setup.bash
  • In a terminal, start a roscore: roscore
  • Start the COVINS backend by executing rosrun covins_backend covins_backend_node

Running the ORB-SLAM3 Front-End

Example scripts are provided in orb_slam3/covins_examples/. Don't forget to correctly set the dataset path in every script you want to use (see above: Setting up the environment). You can also check the original ORB-SLAM3 Repo for help on how to use the ORB-SLAM3 front-end.

  • Download the EuRoC dataset (ASL dataset format)
  • Source your workspace: source ~/ws/covins_ws/devel/setup.bash
  • Execute one of the example scripts provided in the orb_slam3/covins_examples/ folder, such as euroc_examples_mh123_vigba.sh
    • euroc_examples_mhX.sh runs the front-end with a single sequence from EuRoC MH1-5.
    • euroc_examples_mh123_vigba.sh runs a 3-agent collaborative SLAM session (sequential) followed by Bundle Adjustment.
    • euroc_examples_mh12345_vigba.sh runs a 5-agent collaborative SLAM session (sequential) followed by Bundle Adjustment.
    • Multiple front-ends can run in parallel. The front-ends can run on the same machine, or on different machines connected through a wireless network. However, when running multiple front-ends on the same machine, note that the performance of COVINS might degrade if the computational resources are overloaded by running too many agents simultaneously.
    • Common error sources:
      • If the front-end is stuck after showing Loading images for sequence 0...LOADED!, most likely your dataset path is wrong.
      • If the front-end is stuck after showing --> Connect to server or shows an error message Could no establish connection - exit, the server is not reachable - the IP might be incorrect, you might have forgotten to start the server, or there is a problem with your network (try pinging the server IP)

COVINS does not support resetting the map onboard the agent. Since map resets are most frequent at the beginning of a session or dataset (for example, due to faulty initialization), the COVINS communication module in the current implementation only starts sending data once a pre-specified number of keyframes has been created by the front-end. This number is specified by comm.start_sending_after_kf in covins/covins_comm/config/config_comm.yaml, and is currently set to 50. Also check Limitations for more details.
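
The corresponding entry, with its current default (illustrative layout):

  # covins_comm/config/config_comm.yaml
  comm.start_sending_after_kf: 50   # keyframes created before the front-end starts transmitting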

Visualization

COVINS provides a config file for visualization with RVIZ (covins.rviz in covins_backend/config/).

  • Run tf.launch in covins_backend/launch/ to set up the coordinate frames for visualization: roslaunch ~/ws/covins_ws/src/covins/covins_backend/launch/tf.launch
  • Launch RVIZ: rviz -d ~/ws/covins_ws/src/covins/covins_backend/config/covins.rviz
    • Covisibility edges between keyframes from different agents are shown in red, while edges between keyframes from the same agent are colored gray (these are not shown by default, but can be activated in RVIZ).
    • In case keyframes are visualized, removed keyframes are displayed in red (keyframes are not shown by default, but can be activated in RVIZ).
    • The section VISUALIZATION in config_backend.yaml provides several options to modify the visualization.

User Interaction

COVINS provides several options to interact with the map held by the back-end. This is implemented through ROS services.

  • Make sure your workspace is sourced: source ~/ws/covins_ws/devel/setup.bash
  • Map save: rosservice call /covins_savemap <AGENT_ID> - this saves the map associated to the agent specified by AGENT_ID.
    • The map will be saved to the folder ..../covins_backend/output/map_data. Make sure the folder is empty before you save a map (COVINS performs a brief check - if a folder named keyframes/ or mappoints/ exists in the target directory, it will show an error and abort the map save process; any other files or folders will not result in an error, though).
  • Map load: rosservice call /covins_loadmap 0 - loads a map stored on disk, from the folder specified by sys.map_path0 in config_backend.yaml.
    • Note: map load needs to be performed before registering any agent.
    • 0 specifies the operation mode of the load functionality. 0 means "standard" map loading, while 1 and 2 will perform place recognition (1) and place recognition and PGO (2). Note that both modes with place recognition are experimental, only "standard" map load is tested and supported for the open-source version of COVINS.
  • Bundle Adjustment: rosservice call /covins_gba <AGENT_ID> <MODE> - performs visual-inertial bundle adjustment on the map associated to the agent specified by AGENT_ID. Modes: 0: BA without outlier rejection; 1: BA with outlier rejection.
  • Map Compression / Redundancy Removal: rosservice call /covins_prunemap <AGENT_ID> <MAX_KFs> - performs redundancy detection and removal on the map associated to the agent specified by AGENT_ID.
    • MAX_KFs specifies the target number of keyframes held by the compressed map. If MAX_KFs=0, the threshold value for measuring redundancy specified by the parameter kf_culling_th_red in config_backend.yaml will be used.
    • All experiments with COVINS were performed specifying the target keyframe count, so we recommend using this option.
    • The parameter kf_culling_max_time_dist in config_backend.yaml specifies the maximum time delta permitted between two consecutive keyframes, in order to limit the error of the IMU integration. If no keyframe can be removed without violating this constraint, map compression will stop, even if the target number of keyframes has not been reached.
  • Note: After a map merge of the maps associated to Agent 0 and Agent 1, the merged map is associated to both agents, i.e. rosservice call /covins_savemap 0 and rosservice call /covins_savemap 1 will save the same (shared) map.
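
Putting the calls above together, a typical interaction with the back-end could look as follows (the argument order is inferred from the parameter descriptions above):

  source ~/ws/covins_ws/devel/setup.bash
  rosservice call /covins_savemap 0         # save the map associated to agent 0
  rosservice call /covins_gba 0 1           # VI bundle adjustment for agent 0, with outlier rejection
  rosservice call /covins_prunemap 0 300    # compress agent 0's map to a target of 300 keyframes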

Parameters

COVINS provides two parameter files to adjust the behavior of the system and algorithms.

  • ../covins_comm/config/config_comm.yaml contains all parameters related to communication and the agent front-end.
  • ../covins_backend/config/config_backend.yaml contains all parameters related to the server back-end.

The user should not be required to change any parameters to run COVINS, except paths and the server IP, as explained in this manual.

Output Files

  • COVINS automatically saves the trajectory estimates of each agent to a file in covins_backend/output. The file KF_<AGENT_ID>.csv stores the poses associated to the agent specified by AGENT_ID.

Running COVINS with ROS

  • Make sure your workspace is sourced: source ~/ws/covins_ws/devel/setup.bash
  • In ~/ws/covins_ws/src/covins/orb_slam3/launch_ros_euroc.launch: adjust the paths for voc and cam
  • cd to orb_slam3/ and run roslaunch launch_ros_euroc.launch
  • run the rosbag file, e.g. rosbag play MH_01_easy.bag
    • When using COVINS with ROS, we recommend skipping the initialization sequence performed at the beginning of each EuRoC MH trajectory. ORB-SLAM3 often performs a map reset after this sequence, which is not supported by COVINS and will therefore cause an error. For example, for MH1, this can be easily done by running rosbag play MH_01_easy.bag --start 45. (Start at: MH01: 45s; MH02: 35s; MH03-05: 15s)

5 Docker Implementation

We also provide COVINS as a Docker implementation. A guide on how to install Docker can be found at https://docs.docker.com/engine/install/. To avoid the need for sudo when running the commands below, you can add your user to the docker group:

sudo usermod -aG docker $USER (see https://docs.docker.com/engine/install/linux-postinstall/)

Building the docker image

Build the Docker image using the Makefile provided in the docker folder, specifying the number of jobs that make and catkin build should use. This can take a while. If the build fails, try again with a reduced number of jobs.

  • make build NR_JOBS=14

Running the docker image

The docker image can be used to run different parts of COVINS (e.g. server, ORB-SLAM3 front-end, ...).

ROS core

The roscore can be started either using the host system's ROS installation (if ROS is installed) or using the docker image:

  • ./run.sh -c

COVINS Server Back-End

The COVINS server back-end needs a running roscore (see above for how to start one). Furthermore, the server needs two configuration files: one for the communication server and one for the back-end. These two files need to be linked when running the docker image.

  • ./run.sh -s ../covins_comm/config/config_comm.yaml ../covins_backend/config/config_backend.yaml

ORB-SLAM3 Front-End

The ORB-SLAM3 front-end client needs the communication server config file, the file which should be executed, and the path to the dataset. The dataset has to be given separately, since the file system of the docker container differs from the host system; the pathDatasetEuroc variable in the run script is therefore adapted automatically inside the docker container.

  • ./run.sh -o ../covins_comm/config/config_comm.yaml ../orb_slam3/covins_examples/euroc_examples_mh1

ORB-SLAM3 ROS Front-End

The ROS wrapper of the ORB-SLAM3 front-end can also be started in the docker container. It requires the server config file and the ROS launch file. A bag file can then be played on the host system, for example.

  • ./run.sh -r ../covins_comm/config/config_comm.yaml ../orb_slam3/Examples/ROS/ORB_SLAM3/launch/launch_docker_ros_euroc.launch

Terminal

A terminal within the docker image can also be opened, for example to send rosservice commands.

  • ./run.sh -t

6 Extended Functionalities

Interfacing a Custom VIO System with COVINS

COVINS exports a generic communication interface that can be integrated into custom keyframe-based VIO systems in order to share map data with the server back-end and generate a collaborative estimate. The code for the communication interface is located in the covins_comm folder, which builds a library of the same name that facilitates communication between the VIO system onboard the agent and the COVINS server back-end.

To make it straightforward to understand which steps need to be taken to interface a VIO front-end with COVINS, we have defined the preprocessor macro COVINS_MOD in covins/covins_comm/include/covins/covins_base/typedefs_base.hpp. This macro indicates all modifications made to the original ORB-SLAM3 code in order to set up the communication with the server back-end.

In a nutshell, the communication interface provides a base communicator class, which is intended to be used to create a derived communicator class tailored to the VIO system. The communicator module runs in a separate thread, taking care of setting up a connection to the server and exchanging map data. For the derived class, the user only needs to define a function that passes data to the communicator module and fills the provided data containers, as well as the Run() function that is continuously executed by the thread allocated to the communicator module. Furthermore, the communicator module uses the predefined message types MsgKeyframe and MsgLandmark for transmitting data to the server; therefore, the user needs to define functions that fill those messages from the custom data structures of the VIO system.
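
As a rough illustration of these steps, the sketch below outlines what such a derived communicator might look like. Apart from MsgKeyframe, MsgLandmark, and Run(), all names are hypothetical placeholders rather than the actual covins_comm API; consult the code in covins_comm and the COVINS_MOD sections for the real interface.

  // Hypothetical sketch; placeholder names, not the actual covins_comm API.
  class MyVioCommunicator : public CommunicatorBase {   // assumed base class name
  public:
      // Called by the VIO system to hand over a newly created keyframe.
      void PassKeyframe(const MyVioKeyframe &kf) {
          MsgKeyframe msg;
          FillMsgKeyframe(kf, msg);   // user-defined: convert VIO data into the message
          AddToOutBuffer(msg);        // assumed: queue the message for transmission
      }

      // Executed continuously by the thread allocated to the communicator.
      void Run() {
          ConnectToServer();              // assumed: set up the connection
          while (!ShutdownRequested()) {
              SendOutBuffers();           // transmit queued MsgKeyframe / MsgLandmark messages
              ProcessIncomingMessages();  // handle any data sent back by the server
          }
      }
  };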

Map Re-Use Onboard the Agent

COVINS also provides the functionality to share data from the collaborative estimate on the server side with the agents participating in the estimate. COVINS provides only the infrastructure to share this data; the method for map re-use needs to be implemented by the user.

By default, COVINS is configured to not send any data back to the agent. This functionality can be activated by setting comm.data_to_client to 1 in config_comm.yaml. The server then regularly sends information about one keyframe back to the agent, and the agent will display a message requesting the user to define what to do with the received information.

  • In the function CollectDataForAgent() in covins_backend/src/covins_backend/, the data to send to the agent can be specified.
  • In the function ProcessKeyframeMessages() in orb_slam3/src/, the processing of the received keyframe can be specified.
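
A minimal configuration change to try this out (same flat key style as the other parameters):

  # covins_comm/config/config_comm.yaml
  comm.data_to_client: 1   # 0 (default): no data sent back to the agent; 1: send keyframe data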

7 Limitations and Known Issues

  • [MAP RESET] ORB-SLAM3 has the functionality to start a new map when tracking is lost, in order to improve robustness. This functionality is not supported by COVINS. The COVINS server back-end assumes that keyframes arriving from a specific agent are shared in a continuous fashion and belong to the same map; if the agent's map is reset and a keyframe with a previously used ID arrives at the server again, the back-end will detect this inconsistency and throw an error. We have almost never experienced this behavior on the EuRoC sequences when using the ASL dataset format, and only rarely when using rosbag files.
    • Too little computational resources available to the front-end can be a reason for more frequent map resets.
    • Map resets are more frequent at the beginning of a dataset, and occur less often once the VIO front-end is well initialized and has been tracking the agent's pose for some time. Therefore, the communication module will only start sending data to the server once a pre-specified number of keyframes has been created by the VIO front-end. This number is specified by comm.start_sending_after_kf in covins/covins_comm/config/config_comm.yaml, and is currently set to 50.
    • Particularly when running with rosbag files, setting the parameter orb.imu_stamp_max_diff: 2.0 in covins/covins_comm/config/config_comm.yaml, instead of the default (1.0), helped to significantly reduce map resets. We did not see any negative impact on the accuracy of the COVINS collaborative estimate from this change.
  • [Duplicate Files] The repository contains 2 copies of the ORB vocabularies, as well as 2 versions of the DBoW library. We decided to use this structure in order to keep the code of COVINS and ORB-SLAM3 as separate as possible.
  • [Mixed Notation] COVINS mainly follows the Google C++ Style Guide. However, some modules re-use code from other open-source software that uses Hungarian notation, such as CCM-SLAM and ORB-SLAM2, and this code has not yet been ported to the new notation convention (in particular, this applies to code related to the FeatureMatcher, KFDatabase, PlaceRecognition, and Se3Solver).

Owner: ETHZ V4RL (Vision for Robotics Lab, ETH Zurich)