ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection

Overview

This repository contains the implementation of the monocular/multi-view 3D object detector ImVoxelNet, introduced in our paper:

ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection
Danila Rukhovich, Anna Vorontsova, Anton Konushin
Samsung AI Center Moscow
https://arxiv.org/abs/2106.01178


Installation

For convenience, we provide a Dockerfile. Alternatively, you can install all required packages manually.

This implementation is based on the mmdetection3d framework. Please refer to the original installation guide, install.md. Also, rotated_iou should be installed.

Most of the ImVoxelNet-related code is located in the following files: detectors/imvoxelnet.py, necks/imvoxelnet.py, dense_heads/imvoxel_head.py, pipelines/multi_view.py.
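
The core operation these modules implement is the back-projection of 2D image features into a 3D voxel volume using the camera intrinsics and extrinsics. Below is a minimal sketch of that projection for a single view and a pinhole camera; the function name and simplifications are ours, and the actual code additionally handles batching, multiple views, and bilinear sampling:

    import torch

    def project_features_to_voxels(features, voxel_centers, intrinsics, extrinsics):
        """Back-project a 2D feature map into a voxel volume (simplified sketch).

        features: (C, H, W) image feature map.
        voxel_centers: (N, 3) voxel centers in world coordinates.
        intrinsics: (3, 3) camera matrix K.
        extrinsics: (3, 4) world-to-camera matrix [R|t].
        Returns (N, C) per-voxel features; zeros outside the camera frustum.
        """
        c, h, w = features.shape
        n = voxel_centers.shape[0]

        # World -> camera: x_cam = R @ x + t, via homogeneous coordinates.
        ones = torch.ones(n, 1)
        cam = (extrinsics @ torch.cat([voxel_centers, ones], dim=1).T).T

        # Camera -> pixel: perspective projection with K.
        uvz = (intrinsics @ cam.T).T
        uv = uvz[:, :2] / uvz[:, 2:].clamp(min=1e-6)

        # Keep voxels that land in front of the camera and inside the image.
        valid = (uvz[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
                & (uv[:, 1] >= 0) & (uv[:, 1] < h)

        # Nearest-neighbour sampling for brevity (the paper uses bilinear).
        out = torch.zeros(n, c)
        u, v = uv[valid, 0].long(), uv[valid, 1].long()
        out[valid] = features[:, v, u].T
        return out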

Datasets

We support three benchmarks based on the SUN RGB-D dataset.

  • For the VoteNet benchmark with 10 object categories, you should follow the instructions in sunrgbd.
  • For the PerspectiveNet benchmark with 30 object categories, the same instructions can be applied; you only need to pass --dataset sunrgbd_monocular when running create_data.py.
  • The Total3DUnderstanding benchmark implies detecting objects of 37 categories, along with camera pose and room layout estimation. Download the preprocessed data as train.json and val.json, put them into ./data/sunrgbd, and then run:
    python tools/data_converter/sunrgbd_total.py

ScanNet. Please follow the instructions in scannet. Note that create_data.py works with point clouds, not RGB images; thus, some preprocessing is needed before running it.

  1. First, you should obtain RGB images. We recommend using a script from SensReader.
  2. Then, put the camera poses and JPG images in the folder with other ScanNet data:
scannet
├── sens_reader
│   ├── scans
│   │   ├── scene0000_00
│   │   │   ├── out
│   │   │   │   ├── frame-000001.color.jpg
│   │   │   │   ├── frame-000001.pose.txt
│   │   │   │   ├── frame-000002.color.jpg
│   │   │   │   ├── ....
│   │   ├── ...

Now, you may run create_data.py with --dataset scannet_monocular.
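
As a quick sanity check before running create_data.py, a small helper like the one below (not part of the repo; it merely assumes the layout above) can verify that every extracted color frame has a matching pose file:

    import os
    from glob import glob

    def list_scannet_frames(scans_root):
        """Yield (scene_id, color_jpg, pose_txt) tuples from SensReader output."""
        for scene_dir in sorted(glob(os.path.join(scans_root, 'scene*'))):
            scene_id = os.path.basename(scene_dir)
            for color in sorted(glob(os.path.join(scene_dir, 'out', '*.color.jpg'))):
                pose = color.replace('.color.jpg', '.pose.txt')
                if os.path.exists(pose):  # skip frames whose pose file is missing
                    yield scene_id, color, pose

    if __name__ == '__main__':
        frames = list(list_scannet_frames('data/scannet/sens_reader/scans'))
        print(f'found {len(frames)} frames with poses')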

For KITTI and nuScenes, please follow instructions in getting_started.md. For nuScenes, set --dataset nuscenes_monocular.

Getting Started

Please see getting_started.md for basic usage examples.

Training

To start training, run dist_train with ImVoxelNet configs:

bash tools/dist_train.sh configs/imvoxelnet/imvoxelnet_kitti.py 8
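
For a single GPU, tools/train.py can be invoked directly with the same config. A quick way to verify that a config parses and the detector builds, using standard mmcv/mmdet3d calls (a sketch; where train_cfg/test_cfg live may vary between configs):

    from mmcv import Config
    from mmdet3d.models import build_detector

    # Parse the config and build the (randomly initialised) detector.
    cfg = Config.fromfile('configs/imvoxelnet/imvoxelnet_kitti.py')
    model = build_detector(cfg.model,
                           train_cfg=cfg.get('train_cfg'),
                           test_cfg=cfg.get('test_cfg'))
    print(type(model).__name__)  # expected: ImVoxelNet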

Testing

Test a pre-trained model using dist_test with ImVoxelNet configs:

bash tools/dist_test.sh configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth 8 --eval mAP
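
A pre-trained checkpoint can also be loaded in Python for ad-hoc inspection via the usual mmdet3d helper (a sketch):

    from mmdet3d.apis import init_detector

    # Build the model from its config and load the trained weights.
    model = init_detector('configs/imvoxelnet/imvoxelnet_kitti.py',
                          'work_dirs/imvoxelnet_kitti/latest.pth',
                          device='cuda:0')
    model.eval()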

Visualization

Visualizations can be created with the test script. For better visualizations, you may set score_thr in the configs to 0.15 or higher:

python tools/test.py configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth --show
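
Instead of editing the config by hand, score_thr can be overridden programmatically and the modified config dumped for the test script (a sketch; where score_thr lives depends on the config layout):

    from mmcv import Config

    cfg = Config.fromfile('configs/imvoxelnet/imvoxelnet_kitti.py')
    # Keep only confident detections in the rendered results.
    if cfg.get('test_cfg') is not None:  # older-style configs: top-level test_cfg
        cfg.test_cfg.score_thr = 0.15
    else:                                # newer configs nest test_cfg in the model
        cfg.model.test_cfg.score_thr = 0.15
    cfg.dump('configs/imvoxelnet/imvoxelnet_kitti_vis.py')

Then pass the dumped config to tools/test.py as above.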

Models

| Dataset | Object Classes | Download Link | Log |
|---|---|---|---|
| SUN RGB-D | 37 from Total3DUnderstanding | total_sunrgbd.pth | total_sunrgbd.log |
| SUN RGB-D | 30 from PerspectiveNet | perspective_sunrgbd.pth | perspective_sunrgbd.log |
| SUN RGB-D | 10 from VoteNet | sunrgbd.pth | sunrgbd.log |
| ScanNet | 18 from VoteNet | scannet.pth | scannet.log |
| KITTI | Car | kitti.pth | kitti.log |
| nuScenes | Car | nuscenes.pth | nuscenes.log |

Example Detections


Citation

If you find this work useful for your research, please cite our paper:

@article{rukhovich2021imvoxelnet,
  title={ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection},
  author={Rukhovich, Danila and Vorontsova, Anna and Konushin, Anton},
  journal={arXiv preprint arXiv:2106.01178},
  year={2021}
}

Comments
  • Why do I need to download and use KITTI velodyne data?

    Hello @filaPro

    I was reading your paper and trying to implement your method on an RGB dataset that I have collected.

    While trying to test your code, it looks like the KITTI Velodyne data also needs to be downloaded. Does your method use the lidar point cloud, or is the point cloud data used for some other purpose?

    Thank you for sharing the code and your help.

    bug 
    opened by chetanmreddy 13
  • Question about MMCV

    When I installed mmcv-full==1.3.8, the error was:

        Traceback (most recent call last):
          File "tools/test.py", line 9, in <module>
            from mmdet3d.apis import single_gpu_test
          File "/newnfs/zzwu/08_3d_code/imvoxelnet03/mmdet3d/__init__.py", line 26, in <module>
            f'MMCV=={mmcv.__version__} is used but incompatible. ' \
        AssertionError: MMCV==1.3.8 is used but incompatible. Please install mmcv>=1.1.5, <=1.3.0.

    When I installed mmcv-full==1.3.1, the error was:

        Traceback (most recent call last):
          File "tools/test.py", line 9, in <module>
            from mmdet3d.apis import single_gpu_test
          File "/newnfs/zzwu/08_3d_code/imvoxelnet03/mmdet3d/__init__.py", line 3, in <module>
            import mmdet
          File "/home/CN/zizhang.wu/anaconda3/envs/imvoxelnet03/lib/python3.7/site-packages/mmdet/__init__.py", line 25, in <module>
            f'MMCV=={mmcv.__version__} is used but incompatible. ' \
        AssertionError: MMCV==1.3.1 is used but incompatible. Please install mmcv>=1.3.8, <=1.4.0.

    opened by rockywind 9
  • General Questions - ImVoxelNet with custom dataset

    Hello,

    Thank you for your work. I have a few questions I wish to have clarified. Context: I am creating a dataset in SUN-RGBD format, and so I would like to understand the format structure.

    1. It looks like the "calib" file (produced once you run the matlab files in the SUN-RGBD folder) contains two rows. The first is the camera extrinsic; however, it is named "Rt", which to my mind should be a 3x4 matrix, yet it is stored as a column-major 3x3 matrix. Which coordinate systems does this extrinsic transform between? From what I understand, it rotates from the depth coordinate system to the camera coordinate system, and then, in the ground-truth labeling, the translation and yaw angle take care of the bounding box position and orientation. Is this understanding correct? (A parsing sketch follows this comment.)

    2. In MMDetection3D there is a "browse_dataset" script that lets you view the ground truths of your dataset to confirm they are correct before training. I was wondering if there is one for SUN-RGBD in ImVoxelNet, as it would be helpful to check that my custom labels in SUN-RGBD format are correct.

    3. I am trying to use the provided Dockerfile; however, my machine runs CUDA 11.1 (an RTX 3090, so as I understand it I cannot downgrade to 10.1), which requires pytorch>=1.8.0. I changed mmcv-full and mmdet to the most recent compatible versions, but I run into the runtime error "... is not compiled with GPU support". Any suggestions for making the Dockerfile compatible with CUDA 11.1? (Running with the provided Dockerfile gives "CUDA error: no kernel image is available for execution on the device".)

    Again, thank you for your time!
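
    Regarding question 1: mmdetection3d-style loaders do read the SUN RGB-D calib file as two whitespace-separated 3x3 matrices, Rt on the first line and the intrinsic K on the second. A minimal parsing sketch follows; the helper name is ours, and the column-major ('F') order is exactly the detail in question, so verify it against the repo's own loader:

        import numpy as np

        def read_sunrgbd_calib(path):
            """Parse a SUN RGB-D calib file.

            Line 1: extrinsic Rt (3x3), line 2: intrinsic K (3x3).
            Both are assumed column-major here -- verify against the repo.
            """
            lines = open(path).read().splitlines()
            Rt = np.array([float(x) for x in lines[0].split()]).reshape(3, 3, order='F')
            K = np.array([float(x) for x in lines[1].split()]).reshape(3, 3, order='F')
            return Rt, K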

    opened by Steven-m2ai 8
  • Can not visualize the output results based on SUN RGB-D!

    python tools/test.py configs/imvoxelnet/imvoxelnet_sunrgbd.py work_dirs/epoch_12.pth --show --show-dir work_dirs/imvoxelnet_sunrgbd/results

    The output is just the original image.

    opened by lihua213 8
  • about create_data.py script?

    Thanks for sharing the code. I met this error without modifying anything. The script is:

        python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti

    The result is:

        Traceback (most recent call last):
          File "tools/create_data.py", line 4, in <module>
            from tools.data_converter import indoor_converter as indoor
        ModuleNotFoundError: No module named 'tools.data_converter'

    opened by rockywind 8
  • 0 Loss and AP

    Hi, thank you for your great work! I have successfully trained and tested your model on the KITTI dataset and it works.

    I am currently trying to train on a custom dataset (which is not cars), but somehow the loss and AP are 0. I have set the image size correctly, and the dataset has already been validated and is good. Do you have any suggestions? I might have missed something in the config.

    opened by alfinnurhalim 7
  • transformation in 'create_nuscenes_monocular_infos'

    opened by Jiayi719 7
  • met one error when run test.py for scannet dataset. KeyError: Caught KeyError in DataLoader worker process 0. KeyError: 'image_paths'

    Hi, when I run test.py on the scannet dataset, I met the following error. Please help, thanks very much.

    $ python tools/test.py configs/imvoxelnet/imvoxelnet_scannet.py ./data/checkpoints/scannet.pth --show --show-dir ./data/scannet/show-dir/
    Use load_from_local loader
    [                ] 0/6, elapsed: 0s, ETA:
    Traceback (most recent call last):
      File "tools/test.py", line 153, in <module>
        main()
      File "tools/test.py", line 129, in main
        outputs = single_gpu_test(model, data_loader, args.show, args.show_dir)
      File "/mmdetection3d/mmdet3d/apis/test.py", line 27, in single_gpu_test
        for i, data in enumerate(data_loader):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
        data = self._next_data()
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
        return self._process_data(data)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
        data.reraise()
      File "/opt/conda/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
        raise self.exc_type(msg)

    Original Traceback (most recent call last):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
        data = fetcher.fetch(index)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 292, in __getitem__
        return self.prepare_test_data(idx)
      File "/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 166, in prepare_test_data
        input_dict = self.get_data_info(index)
      File "/mmdetection3d/mmdet3d/datasets/scannet_monocular_dataset.py", line 19, in get_data_info
        for i in range(len(info['image_paths'])):
    KeyError: 'image_paths'

    bug 
    opened by jasmine202106 7
  • About the train/val splits for SUN RGB-D dataset

    Hello, thanks for your excellent work! I noticed that you have processed the SUN RGB-D annotations into coco format; could you please describe your data processing method and the basis of the splits? I have generated visualizations for the val part, but I cannot find the samples shown in the Total3D paper. Is it because you split the dataset differently?

    Best, Harvey

    opened by Harvey-Mei 6
  • subprocess.CalledProcessError: Command '['/home/xxx/bin/python3', '-u', 'tools/test.py', '--local_rank=0', 'configs/imvoxelnet/imvoxelnet_kitti.py', 'work_dirs/20210503_214214.pth', '--launcher', 'pytorch', '--eval', 'mAP']' returned non-zero exit status 1.

    I run:

    $ bash tools/dist_test.sh configs/imvoxelnet/imvoxelnet_kitti.py work_dirs/20210503_214214.pth 1 --eval mAP
    

    and it reports:

    Traceback (most recent call last):
      File "tools/test.py", line 9, in <module>
        from mmdet3d.apis import single_gpu_test
      File "/home/xxx/imvoxelnet-master/mmdet3d/apis/__init__.py", line 1, in <module>
        from .inference import inference_detector, init_detector, show_result_meshlab
      File "/home/xxx/imvoxelnet-master/mmdet3d/apis/inference.py", line 8, in <module>
        from mmdet3d.core import Box3DMode, show_result
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/__init__.py", line 2, in <module>
        from .bbox import *  # noqa: F401, F403
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/__init__.py", line 4, in <module>
        from .iou_calculators import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/iou_calculators/__init__.py", line 1, in <module>
        from .iou3d_calculator import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py", line 5, in <module>
        from ..structures import get_box_type
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/structures/__init__.py", line 1, in <module>
        from .base_box3d import BaseInstance3DBoxes
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/structures/base_box3d.py", line 5, in <module>
        from mmdet3d.ops.iou3d import iou3d_cuda
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/__init__.py", line 5, in <module>
        from .ball_query import ball_query
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/__init__.py", line 1, in <module>
        from .ball_query import ball_query
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/ball_query.py", line 4, in <module>
        from . import ball_query_ext
    ImportError: cannot import name 'ball_query_ext' from 'mmdet3d.ops.ball_query' (/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/__init__.py)
    Traceback (most recent call last):
      File "/home/xxx/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/xxx/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/xxx/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
        main()
      File "/home/xxx/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/xxx/bin/python3', '-u', 'tools/test.py', '--local_rank=0', 'configs/imvoxelnet/imvoxelnet_kitti.py', 'work_dirs/20210503_214214.pth', '--launcher', 'pytorch', '--eval', 'mAP']' returned non-zero exit status 1.
    
    opened by Light-- 6
  • How to make imVoxelNet support multi-classes in nuScenes dataset?

    Hi @filaPro, thanks for sharing the code. I noticed that your paper only reports nuScenes results for the "car" class. I want to see how it performs with multiple classes, so I modified this line to make the network output 10 classes: https://github.com/saic-vul/imvoxelnet/blob/3512e89ca98e48aebb21a4c9e9fbe5037220b3a4/configs/imvoxelnet/imvoxelnet_nuscenes.py#L26

    I set num_classes=10, but I still only get results for the single class "car"; mAP for all other classes is 0. Have you tried this before? Can you help me?

    opened by XinchaoGou 6
  • Directly install saic-vul/imvoxelnet or first install open-mmlab/mmdetection3d? How to replace?

    Hi, I have a question about the installation. You mentioned "replacing open-mmlab/mmdetection3d with saic-vul/imvoxelnet"; what does this mean? Should we first install mmdetection3d, or just install saic-vul/imvoxelnet?

    Thanks!

    opened by gyhandy 5
  • Some problems about val

    Hello, I found a problem with 'val': the code in line 144, val_dataset.pipeline = cfg.data.train.pipeline, fails and needs to be changed to val_dataset.pipeline = cfg.data.train.dataset.pipeline. Right?
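
    A more defensive version of that line, handling both wrapped (e.g. RepeatDataset) and plain train datasets (a sketch against tools/train.py, not a tested patch):

        # tools/train.py (sketch): take the pipeline from the right nesting level.
        train_cfg = cfg.data.train
        if 'dataset' in train_cfg:  # wrapped dataset, e.g. RepeatDataset
            val_dataset.pipeline = train_cfg.dataset.pipeline
        else:
            val_dataset.pipeline = train_cfg.pipeline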

    opened by wuwangzhuanwan 3
  • How can I train on a dataset without lidar2cam matrix?

    Hi filaPro, thank you for your brilliant work in 3D detection. I'm trying to train ImVoxelNet on my own dataset, which only has a world2cam matrix and no lidar2cam matrix, unlike the KITTI dataset. Is the lidar2cam matrix necessary for training? If it is, can I train with the world2cam matrix instead? Thank you!

    opened by Italian-PAO 3
  • Voxel Size

    Hello,

    I am experimenting with a custom dataset on ImVoxelNet. My dataset is ~2000 images, and I am running into extreme overfitting issues. For example, the prediction on a validation image follows the same pattern as some of the training image predictions.

    I was looking into what could be the cause. I guess I could try playing with the lr and the scheduler; however, I was also looking into the voxel size and count. Do you think these could have any effect on the outcome? Any other advice? Thanks!

    opened by Steven-m2ai 8
  • How to use outputs of layout / angles from a pretrained model?

    I'm playing with the SUN RGB-D model (v3 | [email protected]: 43.7), which uses 20211007_105247.pth and imvoxelnet_total_sunrgbd_fast.py.

    For each image I'm testing, I have only the RGB and a 3x3 intrinsic matrix that maps camera space to screen space.

    I've been able to follow the demo code in general so far! Perhaps I'm missing it, but the flow and pipeline of the available demos appear not to use the outputs for layout / angles. However, the visualized images elsewhere seem to have layout or room-tilt predictions applied along with the per-object yaw angles.

    I want to make sure that I'm using the SUN RGB-D model correctly. Are there any examples I can follow to make sure I apply the room tilt to the objects? E.g., say my end goal is an 8-vertex mesh per object in camera coordinates?

    For instance, show_result and _write_oriented_bbox seem to use only the yaw angle, and those appear to be the two main functions for visualizing (unless I'm missing some code).

    To be clear, the predictions are definitely being made as expected; it's only the exact steps for applying them that are ambiguous to me.
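
    For what it's worth, turning a predicted (center, size, yaw) box into 8 camera-space vertices is a standard computation. A hedged numpy sketch is below; the axis order and yaw sign are assumptions to be checked against the repo's box class, and a predicted room tilt would be an extra rotation applied on top:

        import numpy as np

        def box_to_corners(center, dims, yaw):
            """8 corners of a 3D box rotated by `yaw` about the vertical (z) axis.

            Axis conventions here are assumptions; check the repo's box class.
            """
            dx, dy, dz = np.asarray(dims) / 2.0
            # Corners of the axis-aligned box centred at the origin.
            corners = np.array([[sx * dx, sy * dy, sz * dz]
                                for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
            c, s = np.cos(yaw), np.sin(yaw)
            rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            return corners @ rot.T + np.asarray(center)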

    opened by garrickbrazil 5