SphereRPN

Code for the paper SphereRPN: Learning Spheres for High-Quality Region Proposals on 3D Point Clouds Object Detection, ICIP 2021.

Authors: Thang Vu, Kookhoi Kim, Haeyong Kang, Xuan Thanh Nguyen, Tung M. Luu, Chang D. Yoo

Installation

Requirements

  • Python 3.7.0
  • PyTorch 1.1.0
  • CUDA 9.0

Virtual Environment

conda create -n pointgroup python==3.7
source activate pointgroup

Install

(1) Clone the repository.

git clone https://github.com/llijiang/PointGroup.git --recursive 
cd PointGroup

(2) Install the dependent libraries.

pip install -r requirements.txt
conda install -c bioconda google-sparsehash 

(3) For sparse convolution, we use the spconv implementation. The repository is downloaded recursively in step (1). We use spconv version 1.0.

Note: We further modify spconv/spconv/functional.py to make grad_output contiguous. Make sure you use our modified spconv.
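
For reference, the change follows the pattern sketched below. This is an illustrative example only, not the actual spconv code: the real modification lives in spconv/spconv/functional.py, and the class and tensor names there differ.

import torch

class ContiguousGradExample(torch.autograd.Function):
    # Illustrates the pattern used in the modified spconv: make grad_output
    # contiguous in backward() before handing it to low-level kernels that
    # require contiguous memory.
    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(x, weight)
        return x.matmul(weight)

    @staticmethod
    def backward(ctx, grad_output):
        x, weight = ctx.saved_tensors
        grad_output = grad_output.contiguous()  # the key added call
        return grad_output.matmul(weight.t()), x.t().matmul(grad_output)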

  • To compile spconv, first install the dependent libraries.
conda install libboost
conda install -c daleydeng gcc-5 # need gcc-5.4 for sparseconv

Add the $INCLUDE_PATH$ that contains boost to lib/spconv/CMakeLists.txt (not necessary if it can be found automatically).

include_directories($INCLUDE_PATH$)
  • Compile the spconv library.
cd lib/spconv
python setup.py bdist_wheel
  • Run cd dist and use pip to install the generated .whl file.

(4) Compile the pointgroup_ops library.

cd lib/pointgroup_ops
python setup.py develop

If any header files cannot be found, run the following commands.

python setup.py build_ext --include-dirs=$INCLUDE_PATH$
python setup.py develop

$INCLUDE_PATH$ is the path to the folder containing the header files that cannot be found.

Data Preparation

(1) Download the ScanNet v2 dataset.

(2) Put the data in the corresponding folders.

  • Copy the files [scene_id]_vh_clean_2.ply, [scene_id]_vh_clean_2.labels.ply, [scene_id]_vh_clean_2.0.010000.segs.json and [scene_id].aggregation.json into the dataset/scannetv2/train and dataset/scannetv2/val folders according to the ScanNet v2 train/val split.

  • Copy the files [scene_id]_vh_clean_2.ply into the dataset/scannetv2/test folder according to the ScanNet v2 test split.

  • Put the file scannetv2-labels.combined.tsv in the dataset/scannetv2 folder.

The dataset files are organized as follows.

PointGroup
├── dataset
│   ├── scannetv2
│   │   ├── train
│   │   │   ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.labels.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json
│   │   ├── val
│   │   │   ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.labels.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json
│   │   ├── test
│   │   │   ├── [scene_id]_vh_clean_2.ply 
│   │   ├── scannetv2-labels.combined.tsv
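
Optionally, before running step (3), you can verify that every scene folder is complete with a short check like the one below (not part of the repository; it only relies on the layout and file suffixes shown above).

# Hypothetical helper: report scenes in dataset/scannetv2/train that are
# missing any of the four required files. Run from the PointGroup root and
# adjust the split folder as needed.
from pathlib import Path

root = Path('dataset/scannetv2/train')
suffixes = ['_vh_clean_2.ply', '_vh_clean_2.labels.ply',
            '_vh_clean_2.0.010000.segs.json', '.aggregation.json']
scenes = sorted(p.name[:-len('_vh_clean_2.ply')] for p in root.glob('*_vh_clean_2.ply'))
for scene in scenes:
    missing = [s for s in suffixes if not (root / (scene + s)).exists()]
    if missing:
        print(scene, 'is missing:', ', '.join(missing))
print('checked', len(scenes), 'scenes')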

(3) Generate input files [scene_id]_inst_nostuff.pth for instance segmentation.

cd dataset/scannetv2
python prepare_data_inst.py --data_split train
python prepare_data_inst.py --data_split val
python prepare_data_inst.py --data_split test
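
As a quick sanity check (optional, not part of the provided scripts), you can load one of the generated .pth files and inspect its contents. The scene id below is a placeholder, and the path may need adjusting to wherever the files were written.

import torch

data = torch.load('train/scene0000_00_inst_nostuff.pth')  # placeholder scene id
items = data if isinstance(data, (tuple, list)) else [data]
for item in items:
    print(type(item), getattr(item, 'shape', ''))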

Training

CUDA_VISIBLE_DEVICES=0 python train.py --config config/pointgroup_run1_scannet.yaml 

You can start a TensorBoard session with

tensorboard --logdir=./exp --port=6666

Inference and Evaluation

(1) If you want to evaluate on the validation set, prepare the .txt instance ground-truth files as follows.

cd dataset/scannetv2
python prepare_data_inst_gttxt.py

Make sure that you have prepared the [scene_id]_inst_nostuff.pth files beforehand.

(2) Test and evaluate.

a. To evaluate on the validation set, set split to val and eval to True in the config file. Then run

CUDA_VISIBLE_DEVICES=0 python test.py --config config/pointgroup_run1_scannet.yaml

Alternatively, set save_instance to True and evaluate with the official ScanNet evaluation script.

b. To run on the test set, set (split, eval, save_instance) to (test, False, True). Then run

CUDA_VISIBLE_DEVICES=0 python test.py --config config/pointgroup_run1_scannet.yaml
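
If you want to double-check these fields before launching test.py, a small script like the following can print them from the YAML config (a hypothetical helper, not part of the repository; it assumes PyYAML is installed and simply searches the nested config for the key names used above).

import yaml

def find_keys(node, wanted, prefix=''):
    # Recursively collect the values of the wanted keys from a nested config.
    found = {}
    if isinstance(node, dict):
        for key, value in node.items():
            path = prefix + key
            if key in wanted:
                found[path] = value
            found.update(find_keys(value, wanted, path + '.'))
    return found

with open('config/pointgroup_run1_scannet.yaml') as f:
    cfg = yaml.safe_load(f)
print(find_keys(cfg, {'split', 'eval', 'save_instance'}))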

c. To test with a pretrained model, run

CUDA_VISIBLE_DEVICES=0 python test.py --config config/pointgroup_default_scannet.yaml --pretrain $PATH_TO_PRETRAIN_MODEL$