Official Implementation of SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for Spatial-Aware Visual Representations

Related tags

Deep Learning, SimIPU
Overview

Official Implementation of SimIPU

  • SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for Spatial-Aware Visual Representations
  • Since the code has not been released yet, if you have any questions about reproduction, feel free to contact us. We will do our best to help you.
  • Currently, the core code of SimIPU is implemented in a commercial project. We are doing our best to make the code publicly available.
Comments
  • Question about augmentation

    Question about augmentation

    Hi, I'm a little confused about the data augmentation.

    1. How did you set img_aug when img_moco=True? It seems that we need an 'img_pipeline' in 'simipu_kitti.py', right?
    2. For 3D augmentation, it seems that it is done in this line, so the 3D augmentation is applied to the point features instead of the raw points, right? If I want to try moco=True, how should I set the 3D augmentation? Should I do it in the dataset-building part? https://github.com/zhyever/SimIPU/blob/5b346e392c161a5e9fdde09b1692656bc7cd3faf/project_cl/decorator/inter_intro_decorator_moco_better.py#L394

    Looking forward to your reply. Many thanks.
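
    For reference, here is a minimal mmdetection-style img_pipeline sketch that could serve as a starting point in simipu_kitti.py when img_moco=True. The transforms and keys below are assumptions based on common mmdetection configs, not the official SimIPU setup:

        # Hypothetical img_pipeline sketch; the exact transforms SimIPU expects may differ.
        img_pipeline = [
            dict(type='LoadImageFromFile'),
            dict(type='Resize', img_scale=(1280, 384), keep_ratio=True),
            dict(type='RandomFlip', flip_ratio=0.5),
            dict(type='Normalize',
                 mean=[123.675, 116.28, 103.53],
                 std=[58.395, 57.12, 57.375],
                 to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img']),
        ]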

    opened by sunnyHelen 2
  • error for env setup: ImportError: cannot import name 'ball_query_ext' from 'mmdet3d.ops.ball_query'

    error for env setup: ImportError: cannot import name 'ball_query_ext' from 'mmdet3d.ops.ball_query'

    Thanks for your insightful paper and clear code repo!

    Hi, I ran into ImportError: cannot import name 'ball_query_ext' from 'mmdet3d.ops.ball_query' when running the command bash tools/dist_train.sh project_cl/configs/simipu/simipu_kitti.py 1 --work_dir ./

    Do you know how to solve it?

    Traceback (most recent call last):
      File "tools/train.py", line 16, in <module>
        from mmdet3d.apis import train_model
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/apis/__init__.py", line 1, in <module>
        from .inference import (convert_SyncBN, inference_detector,
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/apis/inference.py", line 10, in <module>
        from mmdet3d.core import (Box3DMode, DepthInstance3DBoxes,
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/core/__init__.py", line 2, in <module>
        from .bbox import *  # noqa: F401, F403
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/core/bbox/__init__.py", line 4, in <module>
        from .iou_calculators import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/core/bbox/iou_calculators/__init__.py", line 1, in <module>
        from .iou3d_calculator import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py", line 5, in <module>
        from ..structures import get_box_type
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/core/bbox/structures/__init__.py", line 1, in <module>
        from .base_box3d import BaseInstance3DBoxes
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/core/bbox/structures/base_box3d.py", line 5, in <module>
        from mmdet3d.ops.iou3d import iou3d_cuda
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/ops/__init__.py", line 5, in <module>
        from .ball_query import ball_query
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/ops/ball_query/__init__.py", line 1, in <module>
        from .ball_query import ball_query
      File "/mnt/lustre/xxh/SimIPU-main/mmdet3d/ops/ball_query/ball_query.py", line 4, in <module>
        from . import ball_query_ext
    ImportError: cannot import name 'ball_query_ext' from 'mmdet3d.ops.ball_query' (/mnt/lustre/xxh/SimIPU-main/mmdet3d/ops/ball_query/__init__.py)

    I noticed that you once met with the same error. https://github.com/open-mmlab/mmdetection3d/issues/503#issuecomment-847618114

    So, I would like to ask for your help~ Hopefully you have a good solution. :)
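
    As a quick sanity check (assuming the error simply means mmdet3d's compiled C++/CUDA extensions are missing in this environment), something like the following can confirm whether the compiled op becomes importable after rebuilding the package from the repo root:

        import importlib

        # Assumption: the ImportError above indicates the compiled extension was never built
        # for this environment, so this check fails until the ops are recompiled.
        try:
            importlib.import_module("mmdet3d.ops.ball_query.ball_query_ext")
            print("ball_query_ext is available")
        except ImportError as exc:
            print("compiled op still missing:", exc)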

    opened by JerryX1110 2
  • A question about eq5 and eq6

    A question about eq5 and eq6

    Thanks for your inspiring work. I have a question about Eq. 5 and Eq. 6. As far as I know, after Eq. 5, f should be a global feature tensor of shape (batch_size, 2048, 1, 1). How can you sample the corresponding image features by projection location? After all, there is no spatial information left in f. Or do you take the features from an earlier layer of the ResNet? Looking forward to your reply.
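
    For illustration only (this is not the paper's code): if the image features are taken from an intermediate ResNet stage that still has spatial resolution, per-point features can be gathered at the projected 2D locations, e.g. with bilinear sampling. All shapes below are made up for the example:

        import torch
        import torch.nn.functional as F

        feat_map = torch.randn(2, 256, 24, 80)   # (B, C, H, W) spatial feature map
        uv = torch.rand(2, 1024, 2) * 2 - 1      # projected points, normalized to [-1, 1]

        # grid_sample expects a (B, H_out, W_out, 2) grid; treat the N points as a 1 x N grid.
        point_feats = F.grid_sample(feat_map, uv.unsqueeze(1), align_corners=False)
        point_feats = point_feats.squeeze(2).transpose(1, 2)  # (B, N, C)
        print(point_feats.shape)                               # torch.Size([2, 1024, 256])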

    opened by lianchengmingjue 2
  • A question about Tab.5 in Ablation Study

    A question about Tab.5 in Ablation Study

    Thanks for your excellent work! I have a question about Tab. 5 in the ablation study: why does "Scratch" equal "SimIPU w/o inter-module"? That would imply the intra-module part is useless.

    opened by Trent-tangtao 1
  • Have you tried not to crop gradient of f^{\alpha} in eq7?

    Have you tried not to crop gradient of f^{\alpha} in eq7?

    Hi, I like your good work! I am wondering whether you have tried not to crop the gradient of $f^{\alpha}$ in Eq. 7. If you crop the gradient, it seems that the pre-training of the point branch cannot learn anything from the image branch.
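
    For what it's worth, a minimal PyTorch sketch of what stopping the gradient on one branch does in a contrastive-style term (f_img below is a hypothetical stand-in for $f^{\alpha}$; this is not the paper's exact loss):

        import torch
        import torch.nn.functional as F

        f_img = torch.randn(8, 128, requires_grad=True)    # image-branch features (stand-in)
        f_point = torch.randn(8, 128, requires_grad=True)  # point-branch features (stand-in)

        # With .detach(), no gradient from this term flows back into the image branch;
        # the point branch is still updated to align with the (fixed) image features.
        loss = 1 - F.cosine_similarity(f_point, f_img.detach()).mean()
        loss.backward()
        print(f_point.grad is not None, f_img.grad is None)  # True True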

    opened by Hiusam 1
  • issues about create_data

    issues about create_data

    Hi, thanks for sharing your great work. I encountered some issues while creating the data by running create_data.py:

    First create reduced point cloud for training set
    [ ] 0/3712, elapsed: 0s, ETA:
    Traceback (most recent call last):
      File "tools/create_data.py", line 247, in <module>
        out_dir=args.out_dir)
      File "tools/create_data.py", line 24, in kitti_data_prep
        kitti.create_reduced_point_cloud(root_path, info_prefix)
      File "/mnt/lustre/chenzhuo1/hzha/SimIPU/tools/data_converter/kitti_converter.py", line 374, in create_reduced_point_cloud
        _create_reduced_point_cloud(data_path, train_info_path, save_path)
      File "/mnt/lustre/chenzhuo1/hzha/SimIPU/tools/data_converter/kitti_converter.py", line 314, in _create_reduced_point_cloud
        count=-1).reshape([-1, num_features])
    ValueError: cannot reshape array of size 461536 into shape (6)

    It seems we should set num_features=4 and front_camera_id=2 in this line: https://github.com/zhyever/SimIPU/blob/5b346e392c161a5e9fdde09b1692656bc7cd3faf/tools/data_converter/kitti_converter.py#L291
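
    For context, a minimal illustration of the reshape (the file path is hypothetical): KITTI velodyne .bin files store 4 float32 values per point, so the 461536 values above reshape cleanly with num_features=4 (115384 points) but not with 6:

        import numpy as np

        # Hypothetical path; KITTI velodyne scans store x, y, z, reflectance per point.
        num_features = 4
        points = np.fromfile("data/kitti/training/velodyne/000000.bin",
                             dtype=np.float32, count=-1).reshape([-1, num_features])
        print(points.shape)  # (N, 4)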

    I assumed doing this would solve the problem, but then I encountered another problem during "Create GT Database of KittiDataset":

    [ ] 0/3712, elapsed: 0s, ETA:
    Traceback (most recent call last):
      File "tools/create_data.py", line 247, in <module>
        out_dir=args.out_dir)
      File "tools/create_data.py", line 44, in kitti_data_prep
        with_bbox=True)  # for moca
      File "/mnt/lustre/chenzhuo1/hzha/SimIPU/tools/data_converter/create_gt_database.py", line 275, in create_groundtruth_database
        P0 = np.array(example['P0']).reshape(4, 4)
    KeyError: 'P0'

    Can you help me figure out how to solve these issues?

    opened by sunnyHelen 21
Owner
Zhyever
Keep going.