REGTR: End-to-end Point Cloud Correspondences with Transformers

Overview

This repository contains the source code for REGTR. REGTR utilizes multiple transformer attention layers to directly predict each downsampled point's corresponding location in the other point cloud. Unlike typical correspondence-based registration algorithms, the predicted correspondences are clean and do not require an additional RANSAC step. This results in a fast, yet accurate registration.

REGTR Network Architecture
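Because the correspondences are predicted directly, the final pose can be recovered in closed form from the (weighted) correspondences instead of via RANSAC. Below is a minimal sketch of a weighted Kabsch/Procrustes solver to illustrate this step; the function and variable names are illustrative and not taken from this repository.

import numpy as np

def weighted_rigid_transform(src, tgt, weights):
    """Closed-form weighted Kabsch: R, t minimizing sum_i w_i * ||R @ src_i + t - tgt_i||^2.
    src, tgt: (N, 3) corresponding points; weights: (N,) non-negative confidences."""
    w = weights / (weights.sum() + 1e-12)
    src_mean = (w[:, None] * src).sum(axis=0)
    tgt_mean = (w[:, None] * tgt).sum(axis=0)
    src_c, tgt_c = src - src_mean, tgt - tgt_mean
    # Weighted cross-covariance and its SVD
    H = (w[:, None] * src_c).T @ tgt_c
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction so that det(R) = +1
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = tgt_mean - R @ src_mean
    return R, t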

If you find this useful, please cite:

@inproceedings{yew2022regtr,
  title={REGTR: End-to-end Point Cloud Correspondences with Transformers},
  author={Yew, Zi Jian and Lee, Gim Hee},
  booktitle={CVPR},
  year={2022},
}

Environment

Our model is trained with the following environment:

Python 3.8.8
PyTorch 1.9.1 with torchvision 0.10.1 (CUDA 11.1)
PyTorch3D 0.6.0
MinkowskiEngine 0.5.4

Other required packages can be installed using pip: pip install -r src/requirements.txt.

Data and Preparation

Follow the instructions below to download each dataset (as necessary). Your folder should then look like this:

.
├── data/
    ├── indoor/
        ├── test/
        |   ├── 7-scenes-redkitchen/
        |   |   ├── cloud_bin_0.info.txt
        |   |   ├── cloud_bin_0.pth
        |   |   ├── ...
        |   ├── ...
        ├── train/
        |   ├── 7-scenes-chess/
        |   |   ├── cloud_bin_0.info.txt
        |   |   ├── cloud_bin_0.pth
        |   |   ├── ...
        ├── test_3DLoMatch_pairs-overlapmask.h5
        ├── test_3DMatch_pairs-overlapmask.h5
        ├── train_pairs-overlapmask.h5
        └── val_pairs-overlapmask.h5
    └── modelnet40_ply_hdf5_2048
        ├── ply_data_test0.h5
        ├── ply_data_test1.h5
        ├── ...
├── src/
└── Readme.md

3DMatch

Download the processed dataset from the Predator project site, and place it into ../data.

Then for efficiency, it is recommended to pre-compute the overlapping points (used for computing the overlap loss). You can do this by running the following from the src/ directory:

python data_processing/compute_overlap_3dmatch.py
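Conceptually, a point is counted as overlapping if, after applying the ground-truth pose, it has a neighbour in the other cloud within a small radius. The following is a rough sketch of such a computation, not the actual script; the radius value is illustrative.

import numpy as np
from scipy.spatial import cKDTree

def overlap_masks(src, tgt, rot, trans, radius=0.0375):
    """Boolean masks of points in src/tgt that overlap with the other cloud
    under the ground-truth pose (rot: 3x3 rotation, trans: 3-vector)."""
    src_warped = src @ rot.T + trans
    # A point overlaps if its nearest neighbour in the other cloud is within `radius`
    d_src, _ = cKDTree(tgt).query(src_warped, k=1)
    d_tgt, _ = cKDTree(src_warped).query(tgt, k=1)
    return d_src < radius, d_tgt < radius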

ModelNet

Download the PointNet-processed dataset from here, and place it into ../data.
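The HDF5 files follow the standard PointNet-processed ModelNet40 layout. Assuming the usual 'data' and 'label' keys, a quick sanity check of the download looks like this:

import h5py

with h5py.File('../data/modelnet40_ply_hdf5_2048/ply_data_test0.h5', 'r') as f:
    points = f['data'][:]   # typically (num_clouds, 2048, 3) xyz coordinates
    labels = f['label'][:]  # per-cloud class indices
print(points.shape, labels.shape)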

Pretrained models

You can download our trained models here. Unzip the files into trained_models/.

Demo

We provide a simple demo script demo.py that loads our model and checkpoints, registers 2 point clouds, and visualizes the result. Simply download the pretrained models and run the following from the src/ directory:

python demo.py --example 0  # choose from 0 - 4 (see code for details)

Press 'q' to end the visualization and exit. Refer to the documentation of visualize_result() for an explanation of the visualization.
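If you want to apply an estimated 4x4 pose to a cloud yourself (e.g. to save or re-inspect the registered pair outside the demo), a minimal numpy/Open3D sketch is given below; the variable names are placeholders rather than the demo's actual outputs.

import numpy as np
import open3d as o3d

def apply_pose(points, pose):
    """Apply a 4x4 rigid transform to an (N, 3) array of points."""
    return points @ pose[:3, :3].T + pose[:3, 3]

# Placeholder inputs; in practice these would come from the model/demo output
src_xyz = np.random.rand(1000, 3)
tgt_xyz = np.random.rand(1000, 3)
pred_pose = np.eye(4)

src_reg = apply_pose(src_xyz, pred_pose)
pcd_src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_reg))
pcd_tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt_xyz))
pcd_src.paint_uniform_color([1.0, 0.7, 0.0])
pcd_tgt.paint_uniform_color([0.0, 0.4, 1.0])
o3d.visualization.draw_geometries([pcd_src, pcd_tgt])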

Inference/Evaluation

The following code in the src/ directory performs evaluation using the pretrained checkpoints as provided above; change the checkpoint paths accordingly if you're using your own trained models. Note that due to non-determinism from the neighborhood computation during our GPU-based KPConv processing, the results will differ slightly (e.g. mean registration recall may differ by around +/- 0.2%) between each run.

3DMatch / 3DLoMatch

This will run inference and compute the evaluation metrics used in Predator (registration success of <20cm).

# 3DMatch
python test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DMatch

# 3DLoMatch
python test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DLoMatch
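For reference, the <20cm criterion means a pair counts as successfully registered when the RMSE of the ground-truth correspondences, after applying the estimated transform, is below 0.2 m; registration recall is the fraction of test pairs passing this check. A rough sketch (illustrative names, not the repository's evaluation code):

import numpy as np

def registration_success(corr_src, corr_tgt, est_pose, thresh=0.2):
    """corr_src/corr_tgt: (N, 3) ground-truth corresponding points,
    est_pose: estimated 4x4 transform mapping src to tgt."""
    warped = corr_src @ est_pose[:3, :3].T + est_pose[:3, 3]
    rmse = np.sqrt(np.mean(np.sum((warped - corr_tgt) ** 2, axis=1)))
    return rmse < thresh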

ModelNet

# ModelNet
python test.py --dev --resume ../trained_models/modelnet/ckpt/model-best.pth --benchmark ModelNet

# ModelLoNet
python test.py --dev --resume ../trained_models/modelnet/ckpt/model-best.pth --benchmark ModelLoNet

Training

Run the following commands from the src/ directory to train the network.

3DMatch (Takes ~2.5 days on a Titan RTX)

python train.py --config conf/3dmatch.yaml

ModelNet (Takes <2 days on a Titan RTX)

python train.py --config conf/modelnet.yaml

Acknowledgements

We would like to thank the authors of Predator, D3Feat, KPConv, and DETR for making their source code public.

Comments
  • About the influence of the weak data augmentation

    Thanks for the great work. I notice that RegTR adopts a much weaker augmentation than the commonly used augmentation in [1, 2, 3]. How does this affect the convergence of RegTR? And will the weak augmentation affect the robustness to large transformation perturbation? Thank you.

    [1] Bai, X., Luo, Z., Zhou, L., Fu, H., Quan, L., & Tai, C. L. (2020). D3Feat: Joint learning of dense detection and description of 3D local features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6359-6367).
    [2] Huang, S., Gojcic, Z., Usvyatsov, M., Wieser, A., & Schindler, K. (2021). Predator: Registration of 3D point clouds with low overlap. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4267-4276).
    [3] Yu, H., Li, F., Saleh, M., Busam, B., & Ilic, S. (2021). CoFiNet: Reliable coarse-to-fine correspondences for robust point cloud registration. Advances in Neural Information Processing Systems, 34, 23872-23884.

    opened by qinzheng93 4
  • Training for custom dataset

    Hi @yewzijian,

    Thanks for sharing your work. I would like to ask you whether you could elaborate with some details about how someone could train the model for a custom dataset.

    Thanks.

    opened by ttsesm 3
  • how to visualize the progress of the training process?

    Is it possible to visualize the progress of the training pipeline described in https://github.com/yewzijian/RegTR#training with tensorboard or another lib?

    opened by ttsesm 2
  • Setting parameter values for training of custom dataset

    Hi @yewzijian! Thanks for sharing the codebase for your work. I am trying to train the network on custom data. As I went through the configuration, I found that for the feature loss config I need to set (r_p, r_n), which according to the paper are (m, 2m), where m is the "voxel distance used in the final downsampling layer in the KPConv backbone". How do I figure out m for my dataset?

    opened by praffulp 1
  • A CUDA Error

    Dear Yew & other friends: I have run the code with the environment given in the readme: Python 3.8.8, PyTorch 1.9.1 with torchvision 0.10.1 (CUDA 11.1), PyTorch3D 0.6.0, MinkowskiEngine 0.5.4, on an RTX 3090.

    But I got the following error:

    Traceback (most recent call last):
      File "train.py", line 88, in <module>
        main()
      File "train.py", line 84, in main
        trainer.fit(model, train_loader, val_loader)
      File "/home/***/codes/RegTR-main/src/trainer.py", line 119, in fit
        losses['total'].backward()
      File "/home/***/enter/envs/regtr/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/***/enter/envs/regtr/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward
        Variable._execution_engine.run_backward(
    RuntimeError: merge_sort: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered

    I have already tried to set os.environ['CUDA_LAUNCH_BLOCKING'] = '1', but it did not work.

    opened by Fzuerzmj 1
  • Is it possible to remove Minkowski Engine?

    Is it possible to remove Minkowski Engine?

    The last release date of Minkowski is May 2021. Its dependencies might not be easily met with new software and hardware. I found it impossible to make RegTR train on my machine because of a CUDA memory problem to which I found no solution. Without Minkowski, I would have more freedom when choosing the versions of PyTorch and everything, so I would have a better chance of solving this problem. I am a SLAM/C++ veteran and a deep learning/Python newbie (I started learning deep learning 2 weeks ago), so it's hard for me to modify it myself for now. I was wondering if you could be so kind as to release a version of RegTR without Minkowski.

    opened by JaySlamer 1
  • Train BUG, please help me

    When I execute the following command: python train.py --config conf/modelnet.yaml, I get the following error:

    
    Traceback (most recent call last):
      File "train.py", line 85, in <module>
        main()
      File "train.py", line 81, in main
        trainer.fit(model, train_loader, val_loader)
      File "/home/zsy/Code/RegTR-main/src/trainer.py", line 79, in fit
        self._run_validation(model, val_loader, step=global_step,
      File "/home/zsy/Code/RegTR-main/src/trainer.py", line 249, in _run_validation
        val_out = model.validation_step(val_batch, val_batch_idx)
      File "/home/zsy/Code/RegTR-main/src/models/generic_reg_model.py", line 83, in validation_step
        pred = self.forward(batch)
      File "/home/zsy/Code/RegTR-main/src/models/regtr.py", line 117, in forward
        kpconv_meta = self.preprocessor(batch['src_xyz'] + batch['tgt_xyz'])
      File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/zsy/Code/RegTR-main/src/models/backbone_kpconv/kpconv.py", line 489, in forward
        pool_p, pool_b = batch_grid_subsampling_kpconv_gpu(
      File "/home/zsy/Code/RegTR-main/src/models/backbone_kpconv/kpconv.py", line 232, in batch_grid_subsampling_kpconv_gpu
        sparse_tensor = ME.SparseTensor(
      File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/MinkowskiSparseTensor.py", line 275, in __init__
        coordinates, features, coordinate_map_key = self.initialize_coordinates(
      File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/MinkowskiSparseTensor.py", line 338, in initialize_coordinates
        features = spmm_avg.apply(self.inverse_mapping, cols, size, features)
      File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/sparse_matrix_functions.py", line 183, in forward
        result, COO, vals = spmm_average(
      File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/sparse_matrix_functions.py", line 93, in spmm_average
        result, COO, vals = MEB.coo_spmm_average_int32(
    RuntimeError: CUSPARSE_STATUS_INVALID_VALUE at /tmp/pip-req-build-h0w4jzhp/src/spmm.cu:591
    

    My environment is configured as required. I think the problem might be with the code below:

        sparse_tensor = ME.SparseTensor(
            features=points,
            coordinates=coord_batched,
            quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE
        )


    I can't solve it, please help me. Thanks!

    opened by immensitySea 3
  • How to get the image result of  Visualization of attention?

    Hi, Zi Jian:

    Thanks for sharing such nice work. Would you mind sharing the method to reproduce your attention visualization results, as presented in Fig. 5 and Fig. 6 of your paper?

    opened by ZJU-PLP 2
  • A sparse tensor bug

    Ubuntu 18.04, RTX 3090, CUDA 11.1, MinkowskiEngine 0.5.4

    The following error occurred when I tried to run your model.

    (RegTR) ➜ src git:(main) ✗ python test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DMatch

    /home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/__init__.py:36: UserWarning: The environment variable OMP_NUM_THREADS not set. MinkowskiEngine will automatically set OMP_NUM_THREADS=16. If you want to set OMP_NUM_THREADS manually, please export it on the command line before running a python script. e.g. export OMP_NUM_THREADS=12; python your_program.py. It is recommended to set it below 24.
      warnings.warn(
    /home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils.
      warnings.warn("Setuptools is replacing distutils.")
    04/23 20:06:22 [INFO] root - Output and logs will be saved to ../logdev
    04/23 20:06:22 [INFO] cvhelpers.misc - Command: test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DMatch
    04/23 20:06:22 [INFO] cvhelpers.misc - Source is from Commit 64e5b3f0 (2022-03-28): Fixed minor typo in Readme.md and demo.py
    04/23 20:06:22 [INFO] cvhelpers.misc - Arguments: benchmark: 3DMatch, config: None, logdir: ../logs, dev: True, name: None, num_workers: 0, resume: ../trained_models/3dmatch/ckpt/model-best.pth
    04/23 20:06:22 [INFO] root - Using config file from checkpoint directory: ../trained_models/3dmatch/config.yaml
    04/23 20:06:22 [INFO] data_loaders.threedmatch - Loading data from ../data/indoor
    04/23 20:06:22 [INFO] RegTR - Instantiating model RegTR
    04/23 20:06:22 [INFO] RegTR - Loss weighting: {'overlap_5': 1.0, 'feature_5': 0.1, 'corr_5': 1.0, 'feature_un': 0.0}
    04/23 20:06:22 [INFO] RegTR - Config: d_embed:256, nheads:8, pre_norm:True, use_pos_emb:True, sa_val_has_pos_emb:True, ca_val_has_pos_emb:True
    04/23 20:06:25 [INFO] CheckPointManager - Loaded models from ../trained_models/3dmatch/ckpt/model-best.pth
    0%| | 0/1623 [00:00<?, ?it/s]
    ** On entry to cusparseSpMM_bufferSize() parameter number 1 (handle) had an illegal value: bad initialization or already destroyed

    Traceback (most recent call last):
      File "test.py", line 75, in <module>
        main()
      File "test.py", line 71, in main
        trainer.test(model, test_loader)
      File "/home/lileixin/work/Point_Registration/RegTR/src/trainer.py", line 204, in test
        test_out = model.test_step(test_batch, test_batch_idx)
      File "/home/lileixin/work/Point_Registration/RegTR/src/models/generic_reg_model.py", line 132, in test_step
        pred = self.forward(batch)
      File "/home/lileixin/work/Point_Registration/RegTR/src/models/regtr.py", line 117, in forward
        kpconv_meta = self.preprocessor(batch['src_xyz'] + batch['tgt_xyz'])
      File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/lileixin/work/Point_Registration/RegTR/src/models/backbone_kpconv/kpconv.py", line 489, in forward
        pool_p, pool_b = batch_grid_subsampling_kpconv_gpu(
      File "/home/lileixin/work/Point_Registration/RegTR/src/models/backbone_kpconv/kpconv.py", line 232, in batch_grid_subsampling_kpconv_gpu
        sparse_tensor = ME.SparseTensor(
      File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/MinkowskiSparseTensor.py", line 275, in __init__
        coordinates, features, coordinate_map_key = self.initialize_coordinates(
      File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/MinkowskiSparseTensor.py", line 338, in initialize_coordinates
        features = spmm_avg.apply(self.inverse_mapping, cols, size, features)
      File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/sparse_matrix_functions.py", line 183, in forward
        result, COO, vals = spmm_average(
      File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/sparse_matrix_functions.py", line 93, in spmm_average
        result, COO, vals = MEB.coo_spmm_average_int32(
    RuntimeError: CUSPARSE_STATUS_INVALID_VALUE at /home/lileixin/MinkowskiEngine/src/spmm.cu:590

    But when I comment out this line of code, the program can run:

    sparse_tensor = ME.SparseTensor(
        features=points,
        coordinates=coord_batched,
        # quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE
    )

    opened by caijillx 10
Releases
v1

Owner
Zi Jian Yew (PhD candidate at National University of Singapore)