[ICCV 2021] Deep Hough Voting for Robust Global Registration


Deep Hough Voting for Robust Global Registration, ICCV, 2021

Project Page | Paper | Video

Deep Hough Voting for Robust Global Registration
Junha Lee, Seungwook Kim, Minsu Cho, Jaesik Park
POSTECH CSE & GSAI
in ICCV 2021

Figure: An overview of the proposed pipeline.

Overview

Point cloud registration is the task of estimating the rigid transformation that aligns a pair of point cloud fragments. We present an efficient and robust framework for pairwise registration of real-world 3D scans, leveraging Hough voting in the 6D transformation parameter space. First, deep geometric features are extracted from a point cloud pair to compute putative correspondences. We then construct a set of triplets of correspondences to cast votes on the 6D Hough space, representing the transformation parameters in sparse tensors. Next, a fully convolutional refinement module is applied to refine the noisy votes. Finally, we identify the consensus among the correspondences from the Hough space, which we use to predict our final transformation parameters. Our method outperforms state-of-the-art methods on the 3DMatch and 3DLoMatch benchmarks while achieving comparable performance on the KITTI odometry dataset. We further demonstrate the generalizability of our approach by setting a new state of the art on the ICL-NUIM dataset, where we integrate our module into a multi-way registration pipeline.
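
For intuition, here is a minimal NumPy/SciPy sketch of the triplet-based vote casting described above: each triplet of correspondences yields a rigid transform via SVD (Kabsch), which is quantized into a bin of the discretized 6D (rotation, translation) space, and the bin with the strongest consensus wins. This is a conceptual illustration only; it does not reproduce DHVR's sparse-tensor voting or the learned refinement module, and all names here are hypothetical.

# conceptual sketch of triplet-based 6D Hough voting (not the DHVR implementation)
import numpy as np
from scipy.spatial.transform import Rotation
from collections import Counter

def kabsch(src, dst):
    # least-squares rigid transform mapping src -> dst from 3+ correspondences
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def hough_vote(corrs_src, corrs_dst, num_triplets=2000,
               rot_bin=np.deg2rad(5), trans_bin=0.1):
    rng = np.random.default_rng(0)
    accumulator = Counter()
    for _ in range(num_triplets):
        idx = rng.choice(len(corrs_src), 3, replace=False)
        R, t = kabsch(corrs_src[idx], corrs_dst[idx])
        euler = Rotation.from_matrix(R).as_euler("xyz")
        # quantize the 6D parameters (3 rotation + 3 translation) into a bin
        key = tuple(np.round(euler / rot_bin).astype(int)) + \
              tuple(np.round(t / trans_bin).astype(int))
        accumulator[key] += 1
    # the bin with the most votes represents the consensus transformation
    return accumulator.most_common(1)[0]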

Citing our paper

@InProceedings{lee2021deephough, 
    title={Deep Hough Voting for Robust Global Registration},
    author={Junha Lee and Seungwook Kim and Minsu Cho and Jaesik Park},
    booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year={2021}
}

Experiments

Figure: Speed vs. accuracy comparison and qualitative results.
Table: Accuracy vs. speed.

Installation

This repository is developed and tested on

  • Ubuntu 18.04
  • CUDA 11.1
  • Python 3.8.11
  • PyTorch 1.9.1
  • MinkowskiEngine 0.5.4

Environment Setup

Our pipeline is built on MinkowskiEngine. You can install MinkowskiEngine and the Python requirements on your system with:

# setup requirements for MinkowskiEngine
conda create -n dhvr python=3.8
conda install pytorch=1.9.1 torchvision cudatoolkit=11.1 -c pytorch -c nvidia
conda install numpy
conda install openblas-devel -c anaconda

# install MinkowskiEngine
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=${CONDA_PREFIX}/include" --install-option="--blas=openblas"

# download and setup DHVR
git clone https://github.com/junha-l/DHVR.git
cd DHVR
pip install -r requirements.txt
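
As an optional sanity check (not part of the official setup), the following one-liner should print True and the installed MinkowskiEngine version on a CUDA-enabled machine:

# optional: verify that PyTorch sees the GPU and MinkowskiEngine imports cleanly
python -c "import torch, MinkowskiEngine as ME; print(torch.cuda.is_available(), ME.__version__)"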

We also depend on torch-batch-svd, an open-source library for up to 100x faster batched SVD on GPU. Follow the instructions below to install torch-batch-svd:

# if your CUDA installation directory is not "/usr/local/cuda", specify it explicitly
(CUDA_HOME=PATH/TO/CUDA/ROOT) bash scripts/install_3rdparty.sh
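
As a quick smoke test, a usage sketch along these lines (assuming the torch_batch_svd.svd entry point exported by the library) decomposes a batch of small matrices on the GPU:

# smoke test for torch-batch-svd (assumes the torch_batch_svd.svd entry point)
import torch
from torch_batch_svd import svd

A = torch.randn(1024, 3, 3).cuda()  # batch of 3x3 matrices, as used for rigid fitting
U, S, V = svd(A)                    # U: (1024, 3, 3), S: (1024, 3), V: (1024, 3, 3)
# reconstruction error should be near zero
print(torch.dist(A, U @ torch.diag_embed(S) @ V.transpose(-2, -1)))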

3DMatch Dataset

Training

You can download the preprocessed training dataset, provided by the authors of FCGF, via these commands:

# download 3dmatch train set 
bash scripts/download_3dmatch.sh PATH/TO/3DMATCH
# create symlink
ln -s PATH/TO/3DMATCH ./dataset/3dmatch

Testing

The official 3DMatch test set is available on the official website. Download the fragment data of the Geometric Registration Benchmark and decompress it into a new folder.

Then, create a symlink via the following command:

ln -s PATH/TO/3DMATCH_TEST ./dataset/3dmatch-test

Train DHVR

The default feature extractor used in our experiments is FCGF. You can download pretrained FCGF models via the following command:

bash scripts/download_weights.sh

Then, train with

python train.py config/train_3dmatch.gin --run_name NAME_OF_EXPERIMENT

Test DHVR

You can test DHVR with the following commands:

3DMatch

python test.py config/test_3dmatch.gin --run_name EXP_NAME --load_path PATH/TO/CHECKPOINT

3DLoMatch

python test.py config/test_3dlomatch.gin --run_name EXP_NAME --load_path PATH/TO/CHECKPOINT

Pretrained Weights

We also provide pretrained weights on the 3DMatch dataset. You can download the checkpoint from the following link.
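
If you want to inspect the downloaded file before passing it to test.py, a generic PyTorch snippet like the one below works; the exact dictionary keys depend on how DHVR saves its state, so treat them as unknown:

# inspect a downloaded checkpoint (path is a placeholder; keys vary by save format)
import torch

ckpt = torch.load("PATH/TO/CHECKPOINT", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. model weights, optimizer state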

Acknowledgments

Our code is based on MinkowskiEngine. We also refer to FCGF, DGR, and torch-batch-svd.
