On the Limits of Pseudo Ground Truth in Visual Camera Re-Localization

This repository contains the evaluation code and alternative pseudo ground truth poses as used in our ICCV 2021 paper.

[Video overview]

Pseudo Ground Truth for 7Scenes and 12Scenes

We generated alternative SfM-based pseudo ground truth (pGT) using COLMAP to supplement the original D-SLAM-based pseudo ground truth of 7Scenes and 12Scenes.

Pose Files

Please find our SfM pose files in the folder pgt. The pGT files are separated by dataset, individual scene, and test/training split. Each file contains one line per image that follows the format:

rgb_file qw qx qy qz tx ty tz f

Entries q and t represent the pose as a quaternion and a translation vector. The pose maps world coordinates to camera coordinates, i.e. p_cam = R(q) p_world + t. This is the same convention used by COLMAP. Entry f is the focal length of the RGB sensor; it was re-estimated by COLMAP and can differ slightly per scene.

We also provide the original D-SLAM pseudo ground truth in this format to be used with our evaluation code below.
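
As an illustration of this convention, below is a minimal Python sketch that parses one pose file and applies the world-to-camera mapping. The file path is a hypothetical example; only the line format above is taken from the repository.

import numpy as np

def quat_to_rotmat(qw, qx, qy, qz):
    # Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix R(q).
    q = np.array([qw, qx, qy, qz], dtype=np.float64)
    q /= np.linalg.norm(q)
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

# Hypothetical pose file path; each line reads: rgb_file qw qx qy qz tx ty tz f
poses = {}
with open('pgt/sfm/7scenes/chess_test.txt') as f:
    for line in f:
        tokens = line.split()
        rgb_file = tokens[0]
        qw, qx, qy, qz, tx, ty, tz, focal = map(float, tokens[1:])
        poses[rgb_file] = (quat_to_rotmat(qw, qx, qy, qz), np.array([tx, ty, tz]), focal)

# Map a 3D world point into camera coordinates: p_cam = R(q) p_world + t
R, t, focal = poses[rgb_file]
p_world = np.zeros(3)  # example point: the world origin
p_cam = R @ p_world + t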

Full Reconstructions

The COLMAP 3D models are available here:

Note that the Google Drive folder that currently hosts the reconstructions has a daily download limit. We are looking into alternative hosting options.

License Information

Since the 3D models and pose files are derived from the original datasets, they are released under the same licenses as the 7Scenes and 12Scenes datasets. Before using the datasets, please check their licenses (see the dataset websites or the README.md files that come with the 3D models).

Evaluation Code

The main results of our paper can be reproduced using evaluate_estimates.py. The script calculates either the pose error (maximum of rotation and translation error) or the DCRE (dense reprojection error). It prints the recall at a custom threshold to the console and produces a cumulative error plot as a PDF file.
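
For intuition, the following is a minimal sketch of the standard pose error terms and of recall at a threshold pair. It only illustrates the idea and is not taken from evaluate_estimates.py, whose exact definitions (e.g. the translation measure and threshold handling) may differ; the 5 deg / 5 cm thresholds are just an example.

import numpy as np

def rotation_error_deg(R_est, R_gt):
    # Angle of the relative rotation between estimate and pGT, in degrees.
    cos_angle = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def translation_error_m(R_est, t_est, R_gt, t_gt):
    # Distance between the two camera centers c = -R^T t, in meters.
    return np.linalg.norm(-R_est.T @ t_est - (-R_gt.T @ t_gt))

def recall(errors, rot_thresh_deg=5.0, trans_thresh_m=0.05):
    # Fraction of images whose rotation AND translation errors are both
    # below the chosen thresholds.
    ok = [r <= rot_thresh_deg and t <= trans_thresh_m for r, t in errors]
    return sum(ok) / len(ok)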

As input, the script expects a configuration file that points to estimated poses of potentially multiple algorithms and to the pseudo ground truth that these estimates should be compared to. We provide estimated poses of all methods shown in our paper (ActiveSearch, HLoc, R2D2 and DSAC*) in the folder estimates.
These pose files follow the same format as our pGT files described previously, but omit the final f entry.

Furthermore, we provide example config files corresponding to the main experiments in our paper.

Call python evaluate_estimates.py --help for all available options.

For evaluation on 7Scenes, using our SfM pGT, call:

python evaluate_estimates.py config_7scenes_sfm_pgt.json

This produces a new file, config_7scenes_sfm_pgt_pose_err.pdf, containing the cumulative pose error plot.

For the corresponding plot using the original D-SLAM pGT, call:

python evaluate_estimates.py config_7scenes_dslam_pgt.json

Interpreting the Results

The plots above show very different rankings across methods. Yet, as we discuss in our paper, both plots are valid, since neither version of the pGT is clearly superior to the other. Furthermore, it appears plausible that any version of the pGT is only trustworthy up to a certain accuracy threshold. However, it is non-obvious and currently unknown how to determine such a trust threshold. We thus strongly discourage drawing any conclusions from the smaller thresholds alone (beyond the observation that a method might be overfitting to the imperfections of the pseudo ground truth).

We advise always evaluating methods under both versions of the pGT and showing both evaluation results side by side, unless specific reasons are given for preferring one version of the pGT.

DCRE Computation

DCRE computation is triggered with the option --error_type dcre_max or --error_type dcre_mean (see our paper for details). The DCRE needs access to the original 7Scenes or 12Scenes data, as it requires depth maps. We provide two utility scripts, setup_7scenes.py and setup_12scenes.py, that download and unpack the associated datasets. Make sure to check each dataset's license, via the links above, before downloading and using the data.

Note I: The original depth files of 7Scenes are not calibrated, but the DCRE requires calibrated files. The setup script will apply the Kinect calibration parameters found here to register depth to RGB. This essentially involves re-rendering the depth maps, which is implemented in native Python and takes a long time (several hours) due to the large frame count in 7Scenes. However, this step only has to be done once.

Note II: The DCRE computation by evaluate_estimates.py is implemented on the GPU and is reasonably fast. However, due to the large frame count in 7Scenes, it can still take considerable time. The parameter --error_max_images limits the maximum number of frames used to calculate recall and cumulative errors. The default value of 1000 provides a good tradeoff between accuracy and speed. Use --error_max_images -1 to use all images, which is most accurate but slow for 7Scenes.
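
For intuition only, here is a rough single-frame sketch of the dense reprojection error: back-project the depth map under the pGT pose, re-project the resulting 3D points with the estimated pose, and measure the 2D displacement. It assumes a pinhole camera without distortion and a depth map already registered to RGB; the repository's GPU implementation and the exact normalization (see the paper) may differ.

import numpy as np

def dcre(depth, K, R_gt, t_gt, R_est, t_est, reduce='max'):
    # depth: HxW depth map in meters, registered to the RGB image (0 = invalid)
    # K: 3x3 RGB intrinsics; R_*, t_*: world-to-camera poses (p_cam = R p_world + t)
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = depth > 0

    # Back-project valid pixels into the pGT camera frame, then lift to world coordinates.
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())])   # 3xN homogeneous pixels
    p_cam_gt = (np.linalg.inv(K) @ pix) * depth[valid]
    p_world = R_gt.T @ (p_cam_gt - t_gt[:, None])

    # Re-project the world points with the estimated pose.
    p_cam_est = R_est @ p_world + t_est[:, None]
    proj = K @ p_cam_est
    proj = proj[:2] / proj[2]

    # Per-pixel 2D displacement (in pixels) between original and re-projected locations.
    err = np.linalg.norm(proj - pix[:2], axis=0)
    return err.max() if reduce == 'max' else err.mean()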

Uploading Your Method's Estimates

We are happy to include updated evaluation results or evaluation results of new methods in this repository. This enables easy comparisons across methods with unified evaluation code as the field progresses.

If you want your results included, please provide estimates of your method under both pGT versions via a pull request. Please add your estimate files to a custom sub-folder under estimates_external, following the pose file convention described above. We also ask that you provide a text file that links your results to a publication or tech report, or describes how you obtained these results.

estimates_external
├── someone_elses_method
└── your_method
    ├── info_your_method.txt
    ├── dslam
    │   ├── 7scenes
    │   │   ├── chess_your_method.txt
    │   │   ├── fire_your_method.txt
    │   │   ├── ...
    │   └── 12scenes
    │       ├── ...
    └── sfm
        ├── ...

Dependencies

This code requires the following Python packages; we tested it with the package versions given in brackets:

pytorch (1.6.0)
opencv (3.4.2)
scikit-image (0.16.2)

The repository contains an environment.yml for use with Conda:

conda env create -f environment.yml
conda activate pgt

License Information

Our evaluation code and data utility scripts are based on parts of DSAC*, and we provide our code under the same BSD-3 license.

Citation

If you are using either the evaluation code or the Structure-from-Motion pseudo GT for the 7Scenes or 12Scenes datasets, please cite the following work:

@InProceedings{Brachmann2021ICCV,
    author = {Brachmann, Eric and Humenberger, Martin and Rother, Carsten and Sattler, Torsten},
    title = {{On the Limits of Pseudo Ground Truth in Visual Camera Re-Localization}},
    booktitle = {International Conference on Computer Vision (ICCV)},
    year = {2021},
}