DKM - Deep Kernelized Dense Geometric Matching

Contains code for Deep Kernelized Dense Geometric Matching

We provide pretrained models and code for evaluation and for running on your own images. We do not currently provide code for training models, but the model code can essentially be copied into your own training framework and run there.

Note that current models perform better than reported in the pre-print, due to continued development since submission.

Install

Run pip install -e .

Using a (Pretrained) Model

Models can be imported by:

from dkm import dkm_base
model = dkm_base(pretrained=True, version="v11")

This creates a model, and loads pretrained weights.

Running on your own images

from dkm import dkm_base
from PIL import Image
model = dkm_base(pretrained=True, version="v11")
im1, im2 = Image.open("im1.jpg"), Image.open("im2.jpg")
# Note that matches are produced in the normalized grid [-1, 1] x [-1, 1] 
dense_matches, dense_certainty = model.match(im1, im2)
# You may want to process these, e.g. we found dense_certainty = dense_certainty.sqrt() to work quite well in some cases.
# Sample 10000 sparse matches
sparse_matches, sparse_certainty = model.sample(dense_matches, dense_certainty, 10000)
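
The matches above are in the normalized grid. Below is a minimal sketch (not part of the DKM API) for converting them to pixel coordinates; it assumes each match is returned as a torch tensor row laid out as (x_A, y_A, x_B, y_B), so double-check the layout in your version.

import torch

# Hypothetical helper: map matches from the normalized [-1, 1] x [-1, 1] grid
# back to pixel coordinates. Assumes each row is (x_A, y_A, x_B, y_B).
def to_pixel_coords(matches, h_A, w_A, h_B, w_B):
    x_A = (matches[..., 0] + 1) / 2 * w_A
    y_A = (matches[..., 1] + 1) / 2 * h_A
    x_B = (matches[..., 2] + 1) / 2 * w_B
    y_B = (matches[..., 3] + 1) / 2 * h_B
    return torch.stack((x_A, y_A, x_B, y_B), dim=-1)

w_A, h_A = im1.size  # PIL.Image.size is (width, height)
w_B, h_B = im2.size
pixel_matches = to_pixel_coords(sparse_matches, h_A, w_A, h_B, w_B)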

Downloading Benchmarks

HPatches

First, make sure that the "data/hpatches" path exists. We usually create it as a symlink to wherever the dataset is stored:

ln -s place/where/your/datasets/are/stored/hpatches data/hpatches

Then, if you do not already have HPatches downloaded, run bash scripts/download_hpatches.sh

Yfcc100m (OANet Split)

We use the split introduced by OANet; it can be found at, e.g., https://github.com/PruneTruong/DenseMatching

Megadepth (LoFTR Split)

Currently we do not support the LoFTR split, as we trained on one of the scenes used there. Future releases may support this split; stay tuned.

Scannet (SuperGlue Split)

We use the same split of Scannet as SuperGlue. LoFTR provides the split here: https://drive.google.com/drive/folders/1nTkK1485FuwqA0DbZrK2Cl0WnXadUZdc

Evaluation

Here we provide approximate performance numbers for DKM using this codebase. Note that, due to the randomness involved in geometry estimation, the numbers are not exact (typically ±0.5).

HPatches

To evaluate on HPatches Homography Estimation, run:

from dkm import dkm_base
from dkm.benchmarks import HpatchesHomogBenchmark

model = dkm_base(pretrained=True, version="v11")
homog_benchmark = HpatchesHomogBenchmark("data/hpatches")
homog_benchmark.benchmark_hpatches(model)

Results

HPatches Homography Estimation

Method           AUC@3px   AUC@5px   AUC@10px
LoFTR (CVPR'21)  65.9      75.6      84.6
DKM (Ours)       71.2      80.6      88.7
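
For reference, a homography can also be fitted to DKM matches directly with OpenCV. The snippet below is only a sketch, not necessarily the estimator settings used by the benchmark, and assumes the matches have been converted to pixel coordinates as in the sketch above:

import cv2

# Sketch: robustly fit a homography from matches given in pixel coordinates.
# pts_A and pts_B are (N, 2) arrays of corresponding points in images A and B.
pts = pixel_matches.cpu().numpy()
pts_A, pts_B = pts[:, :2], pts[:, 2:]
H, inlier_mask = cv2.findHomography(pts_A, pts_B, cv2.RANSAC, 3.0)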

Scannet Pose Estimation

Here we compare the performance on Scannet of models not trained on Scannet. (For reference, we also include the version of LoFTR trained specifically on Scannet.)

Method                                Trained on   AUC@5   AUC@10   AUC@20   mAP@5   mAP@10   mAP@20
SuperGlue (CVPR'20)                   Megadepth    16.16   33.81    51.84    -       -        -
LoFTR (CVPR'21)                       Megadepth    16.88   33.62    50.62    -       -        -
LoFTR (CVPR'21)                       Scannet      22.06   40.8     57.62    -       -        -
PDCNet (CVPR'21)                      Megadepth    17.70   35.02    51.75    39.93   50.17    60.87
PDCNet+ (Arxiv)                       Megadepth    19.02   36.90    54.25    42.93   53.13    63.95
DKM (Ours)                            Megadepth    22.3    42.0     60.2     48.4    59.5     70.3
DKM (Ours, sqrt confidence sampling)  Megadepth    22.9    43.6     61.4     51.2    62.1     72.0

Yfcc100m Pose Estimation

Here we compare to recent methods using a single forward pass. PDC-Net+ using multiple passes comes closer to our method, reaching an AUC@5 of 37.51. However, the comparison is somewhat unfair, as its inference is much slower.

Method            AUC@5   AUC@10   AUC@20   mAP@5   mAP@10   mAP@20
PDCNet (CVPR'21)  32.21   52.61    70.13    60.52   70.91    80.30
PDCNet+ (Arxiv)   34.76   55.37    72.55    63.93   73.81    82.74
DKM (Ours)        40.0    60.2     76.2     69.8    78.5     86.1
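
The pose numbers above come from fitting two-view geometry to the matches. The snippet below sketches one way to do this with OpenCV; the intrinsic matrix K is a placeholder, both images are assumed to share it, and the RANSAC settings are not necessarily those used in our evaluation. Matches are assumed to be in pixel coordinates, as in the earlier sketch.

import cv2
import numpy as np

# Sketch: recover a relative pose from matches given in pixel coordinates.
# K is a placeholder 3x3 intrinsic matrix; replace it with your calibration.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
pts = pixel_matches.cpu().numpy()
pts_A, pts_B = pts[:, :2], pts[:, 2:]
E, mask = cv2.findEssentialMat(pts_A, pts_B, K, method=cv2.RANSAC, prob=0.99999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts_A, pts_B, K, mask=mask)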

TODO

  • Add Model Code
  • Upload Pretrained Models
  • Add HPatches Homography Benchmark
  • Add More Benchmarks

Acknowledgement

We have used code from, and been inspired by (among others), https://github.com/PruneTruong/DenseMatching, https://github.com/zju3dv/LoFTR, and https://github.com/GrumpyZhou/patch2pix.

BibTeX

If you find our models useful, please consider citing our paper!

@article{edstedt2022deep,
  title={Deep Kernelized Dense Geometric Matching},
  author={Edstedt, Johan and Wadenb{\"a}ck, M{\aa}rten and Felsberg, Michael},
  journal={arXiv preprint arXiv:2202.00667},
  year={2022}
}