PCAM: Product of Cross-Attention Matrices for Rigid Registration of Point Clouds

Anh-Quan Cao1,2, Gilles Puy1, Alexandre Boulch1, Renaud Marlet1,3
1valeo.ai, France and 2Inria, France and 3ENPC, France

If you find this code or work useful, please cite our paper:

@inproceedings{cao21pcam,
  title={{PCAM}: {P}roduct of {C}ross-{A}ttention {M}atrices for {R}igid {R}egistration of {P}oint {C}louds},
  author={Cao, Anh-Quan and Puy, Gilles and Boulch, Alexandre and Marlet, Renaud},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021},
}

Preparation

Installation

  1. This code was implemented with Python 3.7, PyTorch 1.6.0 and CUDA 10.2. Please install PyTorch:
pip install torch==1.6.0 torchvision==0.7.0
  2. Part of the code (voxelisation) uses MinkowskiEngine 0.4.3. Please install it on your system:
sudo apt-get update
sudo apt install libgl1-mesa-glx
sudo apt install libopenblas-dev g++-7
export CXX=g++-7
pip install -U MinkowskiEngine==0.4.3 --install-option="--blas=openblas" -v
  3. Clone this repository and install the additional dependencies:
$ git clone https://github.com/valeoai/PCAM.git
$ cd PCAM/
$ pip install -r requirements.txt
  4. Install lightconvpoint [5], which is an early version of FKAConv:
$ pip install -e ./lcp
  5. Finally, install pcam:
$ pip install -e ./

Since pcam is installed in editable mode, you can edit its code on the fly and import pcam's functions and classes in other projects as well.
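As a quick sanity check (a minimal sketch, not part of the original instructions), you can verify that the installed package resolves to your local checkout:

$ python -c "import pcam; print(pcam)"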

Datasets

3DMatch and KITTI

Follow the instructions in the DGR GitHub repository to download both datasets.

Place 3DMatch in the folder /path/to/pcam/data/3dmatch/, which should have the structure described here.

Place KITTI in the folder /path/to/pcam/data/kitti/, which should have the structure described here.

You can create soft links with the command ln -s if the datasets are stored somewhere else on your system.
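For example (hypothetical source paths, to be adapted to your setup):

$ ln -s /path/to/datasets/3dmatch /path/to/pcam/data/3dmatch
$ ln -s /path/to/datasets/kitti /path/to/pcam/data/kitti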

For these datasets, we use the same dataloaders as in DGR [1-3], up to a few modifications for code compatibility.

ModelNet40

Download the dataset here and unzip it in the folder /path/to/pcam/data/modelnet/, which should have the structure described here.

Again, you can create soft links with the command ln -s if the datasets are stored somewhere else on your system.

For this dataset, we use the same dataloader as in PRNet [4], up to a few modifications for code compatibility.

Pretrained models

Download PCAM pretrained models here and unzip the file in the folder /path/to/pcam/trained_models/, which should have the structure described here.

Testing PCAM

Because PCAM randomly subsamples the point clouds, the scores vary slightly from one run to another. In our paper, we therefore ran 3 independent evaluations on the complete test set and averaged the scores.
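To reproduce this protocol, you can simply launch the evaluation script several times and average the reported metrics yourself, e.g. for the 3DMatch PCAM-soft configuration described below (a sketch, assuming the default configuration files):

$ cd /path/to/pcam/scripts/
$ for i in 1 2 3; do python eval.py with ../configs/3dmatch/soft.yaml; done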

3DMatch

We provide two different pre-trained models for 3DMatch: one for PCAM-sparse and one for PCAM-soft, both trained using 4096 input points.

To test the PCAM-soft model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft.yaml

To test the PCAM-sparse model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/sparse.yaml

Optional

As in DGR [1], the results can be improved using different levels of post-processing.

  1. Keeping only the pairs of points with the highest confidence scores (the threshold was optimised on the validation set of 3DMatch):
$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft_filter.yaml
$ python eval.py with ../configs/3dmatch/sparse_filter.yaml
  2. Using, in addition, the refinement by optimisation proposed by DGR [1]:
$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft_refinement.yaml
$ python eval.py with ../configs/3dmatch/sparse_refinement.yaml
  3. Using, as well, the safeguard proposed by DGR [1]:
$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft_safeguard.yaml
$ python eval.py with ../configs/3dmatch/sparse_safeguard.yaml

Note: For a fair comparison, we fixed the safeguard condition so that it is applied to the same proportion of scans as in DGR [1].

KITTI

We provide two different pre-trained models for KITTI: one for PCAM-sparse and one for PCAM-soft, both trained using 2048 input points.

To test the PCAM-soft model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/kitti/soft.yaml

To test the PCAM-sparse model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/kitti/sparse.yaml

Optional

As in DGR [1], the results can be improved by refining the estimated transformations with ICP.

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/kitti/soft_icp.yaml
$ python eval.py with ../configs/kitti/sparse_icp.yaml 

ModelNet40

There are three different variants of this dataset; please refer to [4] for their construction.

Unseen objects

To test the PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/modelnet/soft.yaml
$ python eval.py with ../configs/modelnet/sparse.yaml

Unseen categories

To test the PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/modelnet/soft_unseen.yaml
$ python eval.py with ../configs/modelnet/sparse_unseen.yaml

Unseen objects with noise

To test the PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/modelnet/soft_noise.yaml
$ python eval.py with ../configs/modelnet/sparse_noise.yaml

Training

The models are saved in the folder /path/to/pcam/trained_models/new_training/{DATASET}/{CONFIG}, where {DATASET} is the name of the dataset and {CONFIG} describes the PCAM architecture and the losses used for training.

3DMatch

To train a PCAM-soft model, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/3dmatch/soft.yaml

You can then test this new model by typing:

$ python eval.py with ../configs/3dmatch/soft.yaml PREFIX='new_training'
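Here, PREFIX='new_training' is a command-line configuration override (the scripts appear to follow the `with` syntax of the Sacred library) that makes eval.py load the model stored under trained_models/new_training/ instead of the provided pretrained weights.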

To train a PCAM-sparse model, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/3dmatch/sparse.yaml

Training took about 12 days on an Nvidia Tesla V100S-32GB.

You can then test this new model by typing:

$ python eval.py with ../configs/3dmatch/sparse.yaml PREFIX='new_training'

KITTI

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/kitti/soft.yaml
$ python train.py with ../configs/kitti/sparse.yaml

Training took about 1 day on an Nvidia GeForce RTX 2080 Ti.

You can then test these new models by typing:

$ python eval.py with ../configs/kitti/soft.yaml PREFIX='new_training'
$ python eval.py with ../configs/kitti/sparse.yaml PREFIX='new_training'

ModelNet

Training PCAM on ModelNet took about 10 hours on an Nvidia GeForce RTX 2080.

Unseen objects

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/modelnet/soft.yaml NB_EPOCHS=10
$ python train.py with ../configs/modelnet/sparse.yaml NB_EPOCHS=10

You can then test these new models by typing:

$ python eval.py with ../configs/modelnet/soft.yaml PREFIX='new_training'
$ python eval.py with ../configs/modelnet/sparse.yaml PREFIX='new_training'

Unseen categories

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/modelnet/soft_unseen.yaml NB_EPOCHS=10
$ python train.py with ../configs/modelnet/sparse_unseen.yaml NB_EPOCHS=10

You can then test these new models by typing:

$ python eval.py with ../configs/modelnet/soft_unseen.yaml PREFIX='new_training'
$ python eval.py with ../configs/modelnet/sparse_unseen.yaml PREFIX='new_training'

Unseen objects with noise

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/modelnet/soft_noise.yaml NB_EPOCHS=10
$ python train.py with ../configs/modelnet/sparse_noise.yaml NB_EPOCHS=10

You can then test these new models by typing:

$ python eval.py with ../configs/modelnet/soft_noise.yaml PREFIX='new_training'
$ python eval.py with ../configs/modelnet/sparse_noise.yaml PREFIX='new_training'

References

[1] Christopher Choy, Wei Dong, Vladlen Koltun. Deep Global Registration. CVPR, 2020.

[2] Christopher Choy, Jaesik Park, Vladlen Koltun. Fully Convolutional Geometric Features. ICCV, 2019.

[3] Christopher Choy, JunYoung Gwak, Silvio Savarese. 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR, 2019.

[4] Yue Wang and Justin M. Solomon. PRNet: Self-Supervised Learning for Partial-to-Partial Registration. NeurIPS, 2019.

[5] Alexandre Boulch, Gilles Puy, Renaud Marlet. FKAConv: Feature-Kernel Alignment for Point Cloud Convolution. ACCV, 2020.

License

PCAM is released under the Apache 2.0 license.
