Symmetry and Uncertainty-Aware Object SLAM for 6DoF Object Pose Estimation

Overview

SUO-SLAM

This repository hosts the code for our CVPR 2022 paper "Symmetry and Uncertainty-Aware Object SLAM for 6DoF Object Pose Estimation". ArXiv link.

Citation

If you use any part of this repository in an academic work, please cite our paper as:

@inproceedings{Merrill2022CVPR,
  Title      = {Symmetry and Uncertainty-Aware Object SLAM for 6DoF Object Pose Estimation},
  Author     = {Nathaniel Merrill and Yuliang Guo and Xingxing Zuo and Xinyu Huang and Stefan Leutenegger and Xi Peng and Liu Ren and Guoquan Huang},
  Booktitle  = {2022 Conference on Computer Vision and Pattern Recognition (CVPR)},
  Year       = {2022},
  Address    = {New Orleans, USA},
  Month      = jun,
}

Installation

This codebase was tested on Ubuntu 18.04. To use the BOP rendering (i.e. for keypoint labeling), install
sudo apt install libfreetype6-dev libglfw3

You will also need a python environment that contains the required packages. To see what packages we used, check out the list of requirements in requirements.txt. They can be installed via pip install -r requirements.txt
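
A minimal way to set up such an environment (the virtual environment name here is just an example) is

$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt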

Preparing Data


Datasets

To run the training and testing (i.e. single-view or with SLAM), first decide on a place to download the data. The disk will need a few hundred GB of space for all the data (at least 150 GB for the downloads and more to extract them). All of our code expects the data to be in a local directory ./data, but you can of course symlink this to another location (perhaps one with more disk space). So, first of all, in the root of this repo run

$ mkdir data

or to symlink to an external location

$ ln -s /path/to/drive/with/space/ ./data

You can pick and choose what data you want to download (for example, just T-LESS or YCBV). Note that all YCBV and T-LESS downloads have our keypoint labels packaged along with the data. Download the following Google Drive links into ./data and extract them.
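
Assuming the downloads arrive as zip archives (an assumption; adjust to the actual archive format), they can be extracted in place with something like

$ cd data && for f in *.zip; do unzip "$f"; done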

When all is said and done, the tree should look like this

$ cd ./data && tree --filelimit 3
.
├── bop_datasets
│   ├── tless 
│   └── ycbv 
├── saved_detections
└── VOCdevkit
    └── VOC2012

Pre-trained models

You can download the pretrained models anywhere, but I like to keep them in the results directory that is written to during training.

Training


First set the default arguments in ./lib/args.py for your username if desired, then execute

$ ./train.py

with the appropriate arguments for your filesystem. You can also run

$ ./train.py -h

for a full list of arguments and their meanings. One important arg is batch_size, the number of images loaded for each training batch. Note that there may be a variable number of objects in each image, and the objects are all stacked together into one big batch to run the network -- so the actual batch size being run may be several times batch_size. In order to keep batch_size reasonably large, we provide another arg called truncate_obj, which, as the help says, truncates the object batch to this number if it exceeds it. We recommend that you start with a large batch size so that you can find the maximum truncate_obj for your GPUs, then reduce the batch size until there are few or no warnings about too many objects being truncated.
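
For example, a hypothetical invocation might look like the following (the exact flag spellings and values are assumptions; check ./train.py -h for the real interface):

$ ./train.py --batch_size 8 --truncate_obj 64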

Evaluation


Before you can evaluate in a single-view or SLAM fashion, you will need to build the third-party libraries for PnP and graph optimization. First make sure that you have the Ceres solver installed, then run

$ ./build_thirdparty.sh
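
If Ceres is not already installed, one option on Ubuntu 18.04 is the distribution package (an assumption; building Ceres from source also works):

$ sudo apt install libceres-dev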

Reproducing Results

To reproduce the results of the paper with the pretrained models, check out the scripts under the scripts directory:

eval_all_tless.sh  eval_all_ycbv.sh  make_video.sh

These will reproduce most of the results in the paper as well as any video clips you want. You may have to change the first few lines of each script. Note that these examples can also show you the proper arguments if you want to run from command line alone.
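
For example, to reproduce the YCB-V results after adjusting the paths at the top of the script for your filesystem:

$ bash scripts/eval_all_ycbv.sh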

Note that for the T-LESS dataset, we use the third-party BOP toolkit to get the VSD error recall, which will show up in the final terminal output as "Mean object recall" among other numbers.

Labeling


Overview

We manually label keypoints on the CAD models so that some keypoints carry semantic meaning. For the full list of keypoint meanings, see the specific README.

We provide our landmark labeling tool. Check out the script manual_keypoints.py. This same script can be used to make a visualization of the keypoints as shown below with the --viz option.
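
For example, to run the tool in visualization mode (assuming the script takes no other required arguments; check its help output for the actual interface):

$ python manual_keypoints.py --viz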

The script will show a panel of views of the same object, each oriented slightly differently. The idea is that you pick the same keypoint multiple times to ensure correctness and to get a better label by averaging the multiple samples.

The script will also print the following directions to the terminal.

============= Welcome ===============
Select the keypoints with a left click!
Use the "wasd" to turn the objects.
Press "i" to zoom in and "o" to zoom out.
Make sure that the keypoint colors match between all views.
Messed up? Just press 'u' to undo.
Press "Enter" to finish and save the keypoints
Press "Esc" to just quit

Once you have pressed "Enter", you will get to an inspection panel.

The unscaled mean keypoints are shown on the left, and the keypoints scaled by covariance on the right, where the ellipses are the Gaussian 3-sigma bounds projected onto the image. If the covariance is too large, or the mean is out of place, then you may have messed up. Again, the program will print out these directions to the terminal:

Inspect the results!
Use the "wasd" to turn the object.
Press "i" to zoom in and "o" to zoom out.
Press "Esc" to go back, "Enter" to accept (saving keypoints and viewpoint for vizualization).
Please pick a point on the object!

So if you are done and the result looks good, press "Enter"; if not, press "Esc" to go back. Also make sure that when you are done, you rotate and scale the object into the best "view pose" (with the front facing the camera and the top facing up), as this pose is used by both the above visualization and the actual training code for determining the best symmetry to pick for an initial detection.

Labeling Tips

Even though there are 8 panels, you don't need to fill out all 8. Each keypoint just needs at least 3 samples to estimate the covariance.

We recommend that you label the same keypoint (say keypoint i) on all the object renderings first, then go to the inspection panel, and repeat this for each keypoint. That way you can easily undo a mistake for keypoint i with the "u" key without losing any other work. Otherwise, if you label each object rendering completely, you may have to undo a lot of labels that were not mistakes.

Also, if you want to label a point that lies in a void of the CAD model, like the top center of the bowl, you can use the multiple samples to your advantage and choose samples that average to the desired result, since each individual label is required to land on the actual CAD model in the labeling tool.


Owner

Robot Perception & Navigation Group (RPNG): research on robot sensing, estimation, localization, mapping, perception, and planning.