Repo for "Benchmarking Robustness of 3D Point Cloud Recognition against Common Corruptions" https://arxiv.org/abs/2201.12296

Overview

Benchmarking Robustness of 3D Point Cloud Recognition against Common Corruptions


This repo contains the dataset and code for the paper Benchmarking Robustness of 3D Point Cloud Recognition against Common Corruptions by Jiachen Sun et al. This codebase is based on SimpleView, and we thank the authors for their great contributions.

ModelNet40-C


More visualizations can be found here.

  • Download ModelNet40-C from Google Drive.
  • Download ModelNet40-C using our provided script.
  • Download ModelNet40-C from Zenodo.

ModelNet40-C Leaderboard

Architecture+Data Augmentation Leaderboard

| Architecture | Data Augmentation | Corruption Error Rate (%) | Clean Error Rate (%) | Checkpoint |
|---|---|---|---|---|
| PCT | PointCutMix-R | 16.3 | 7.2 | checkpoint |
| PCT | PointCutMix-K | 16.5 | 6.9 | checkpoint |
| DGCNN | PointCutMix-R | 17.3 | 6.8 | checkpoint |
| PCT | RSMix | 17.3 | 6.9 | checkpoint |
| DGCNN | PointCutMix-K | 17.3 | 7.4 | checkpoint |
| RSCNN | PointCutMix-R | 17.9 | 7.6 | checkpoint |
| DGCNN | RSMix | 18.1 | 7.1 | checkpoint |
| PCT | PGD Adv Train | 18.4 | 8.9 | checkpoint |
| PointNet++ | PointCutMix-R | 19.1 | 7.1 | checkpoint |
| PointNet++ | PointMixup | 19.3 | 7.1 | checkpoint |
| PCT | PointMixup | 19.5 | 7.4 | checkpoint |
| SimpleView | PointCutMix-R | 19.7 | 7.9 | checkpoint |
| RSCNN | PointMixup | 19.8 | 7.2 | checkpoint |
| PointNet++ | PointCutMix-K | 20.2 | 6.7 | checkpoint |

All pre-trained models trained with each data augmentation method can be downloaded directly here.

Architecture Leaderboard

| Architecture | Corruption Error Rate (%) | Clean Error Rate (%) | Checkpoint |
|---|---|---|---|
| CurveNet | 22.7 | 6.6 | checkpoint |
| PointNet++ | 23.6 | 7.0 | checkpoint |
| PCT | 25.5 | 7.1 | checkpoint |
| GDANet | 25.6 | 7.5 | checkpoint |
| DGCNN | 25.9 | 7.4 | checkpoint |
| RSCNN | 26.2 | 7.7 | checkpoint |
| SimpleView | 27.2 | 6.1 | checkpoint |
| PointNet | 28.3 | 9.3 | checkpoint |
| PointMLP | 31.9 | 6.3 | checkpoint |
| PointMLP-Elite | 32.4 | 7.2 | checkpoint |

Results for more models are coming soon.

All pre-trained models with standard training can be downloaded directly here.

Getting Started

First, clone the repository. We refer to the directory containing the code as ModelNet40-C.

git clone --recurse-submodules [email protected]:jiachens/ModelNet40-C.git

Requirements

The code is tested on Linux with Python 3.7.5, CUDA 10.0, CuDNN 7.6, and GCC 5.4. We recommend using these versions, especially when installing the PointNet++ custom CUDA modules.

[02-23-2022] The updated code is tested on Python 3.7.5, CUDA 11.4, CuDNN 8.2, and GCC 7.5 with the latest torch and torchvision libraries, but we still suggest the original setup to avoid potential instability.
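As a quick sanity check of your environment, the following snippet (our own sketch, not part of the repo) prints the locally installed versions to compare against the tested setups:

# Print the PyTorch, CUDA, and cuDNN versions visible to your environment.
import torch

print("torch:", torch.__version__)
print("CUDA:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())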

Install Libraries

We recommend you first install Anaconda and create a virtual environment.

conda create --name modelnetc python=3.7.5

Activate the virtual environment and install the libraries. Make sure you are in ModelNet40-C.

conda activate modelnetc
pip install -r requirements.txt
conda install sed  # for downloading data and pretrained models

For PointNet++, we need to install custom CUDA modules. Make sure you have access to a GPU during this step. You might need to set the TORCH_CUDA_ARCH_LIST environment variable to match your GPU model. The following should work in most cases: export TORCH_CUDA_ARCH_LIST="6.0;6.1;6.2;7.0;7.5". If the installation fails, check whether TORCH_CUDA_ARCH_LIST is set correctly. More details can be found here.
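If you are unsure which architecture values apply to your GPU, a small helper like the following (our own sketch, not part of the repo) queries the compute capability through PyTorch:

# Query the GPU's compute capability and print a matching TORCH_CUDA_ARCH_LIST.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f'export TORCH_CUDA_ARCH_LIST="{major}.{minor}"')
else:
    print("No visible GPU; the custom CUDA modules need one to build.")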

The third-party modules pointnet2_pyt, PCT_Pytorch, emd, and PyGeM can be installed with the following script.

./setup.sh

Download Datasets Including ModelNet40-C and Pre-trained Models

Make sure you are in ModelNet40-C. The download.sh script downloads all the data and pretrained models and places them in the correct locations.

To download ModelNet40, execute the following command. This downloads the ModelNet40 point cloud dataset released with PointNet++, as well as the validation splits used in our work.

./download.sh modelnet40

To generate the ModelNet40-C dataset, please run:

python data/process.py
python data/generate_c.py

NOTE: generation requires a connected monitor, since the Open3D library does not support background rendering.

We also allow users to download ModelNet40-C directly. Please fill out this Google form when downloading our dataset.

./download.sh modelnet40_c
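Once the dataset is in place, it can be inspected directly with NumPy. The sketch below assumes a file layout like data_<corruption>_<severity>.npy plus a shared label.npy under data/modelnet40_c/, which is our reading of the generation script and default config paths; adjust it if your local copy differs.

# Load one corrupted test split and its labels (file names are assumptions).
import numpy as np

corruption, severity = "uniform", 1
points = np.load(f"data/modelnet40_c/data_{corruption}_{severity}.npy")  # point clouds
labels = np.load("data/modelnet40_c/label.npy")                          # class labels
print(points.shape, labels.shape)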

To download the pretrained models trained with the standard recipe, execute the following command.

./download.sh cor_exp

To download the pretrained models using different data augmentation strategies, execute the following command.

./download.sh runs

New Features

[02-23-2022]

  • We include PointMLP-Elite and GDANet in our benchmark

[02-18-2022]

  • We include CurveNet and PointMLP in our benchmark

[01-28-2022]

  • We include Point Cloud Transformer (PCT) in our benchmark
  • ModelNet40-C/configs contains config files to enable different data augmentations and test-time adaptation methods
  • ModelNet40-C/aug_utils.py contains the data augmentation codes in our paper
  • ModelNet40-C/third_party contains the test-time adaptation methods used in our paper

Code Organization In Original SimpleView

  • ModelNet40-C/models: Code for various models in PyTorch.
  • ModelNet40-C/configs: Configuration files for various models.
  • ModelNet40-C/main.py: Training and testing of all models.
  • ModelNet40-C/configs.py: Hyperparameters for the different models and dataloaders.
  • ModelNet40-C/dataloader.py: Code for different variants of the dataloader.
  • ModelNet40-C/*_utils.py: Code for various utility functions.

Running Experiments

Training and Config files

To train or test any model, we use the main.py script. The format for running this script is as follows.

python main.py --exp-config <path to the config>

The config files are named <protocol>_<model_name><_extra>_run_<seed>.yaml, where <protocol> ∈ {dgcnn, pointnet2, rscnn} and <model_name> ∈ {dgcnn, pointnet2, rscnn, pointnet, simpleview}. For example, the config file to run an experiment for PointNet++ in the DGCNN protocol with seed 1 is dgcnn_pointnet2_run_1.yaml. To run a new experiment with a different seed, change the SEED parameter in the config file. All of our experiments use seed 1.
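For illustration, the snippet below simply enumerates the config file names this scheme implies (no <_extra> part, seed 1); it is a naming aid, not repo code:

# Enumerate the <protocol>_<model_name>_run_<seed>.yaml config names.
protocols = ["dgcnn", "pointnet2", "rscnn"]
models = ["dgcnn", "pointnet2", "rscnn", "pointnet", "simpleview"]
for protocol in protocols:
    for model in models:
        print(f"configs/{protocol}_{model}_run_1.yaml")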

We additionally provide training-time config files for PointCutMix (configs/cutmix), PointMixup (configs/mixup), RSMix (configs/rsmix), and PGD-based adversarial training (configs/pgd).

For example, to train PCT with PointCutMix-R, please use the following command:

python main.py --exp-config configs/cutmix/pct_r.yaml

Evaluate a pretrained model

We provide pretrained models. They can be downloaded using the ./download.sh cor_exp and ./download.sh runs commands and are stored in the ModelNet40-C/runs (data augmentation recipes) and ModelNet40-C/cor_exp (standard training) folders. To test a pretrained model, use a command of the following format:

python main.py --entry test --model-path <cor_exp/runs>/<cfg_name>/<model_name>.pth --exp-config configs/<cfg_name>.yaml

Additionally, we provide test-time config files in configs/bn and configs/tent for the BN and TENT adaptation methods used in our paper; they follow the same command format.

All evaluation commands are listed in the eval_cor.sh, eval_og.sh, and eval_tent_cutmix.sh scripts. Note that in eval_cor.sh, the PGD entries for PointNet++, RSCNN, and SimpleView are expected to produce no outputs, since these models do not fit the adversarial training framework; we discuss this in our paper.
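For reference, the leaderboard's corruption error rate averages the test error over all corruption types and severity levels. Below is a minimal sketch of that aggregation, using made-up numbers rather than the repo's own metric code:

# Average per-(corruption, severity) error rates into a corruption error rate.
import numpy as np

# errors[corruption] = error rates (%) at severities 1-5 (hypothetical values)
errors = {
    "uniform":  [8.1, 9.0, 10.2, 12.5, 15.3],
    "gaussian": [8.4, 9.6, 11.0, 13.1, 16.0],
}
corruption_error_rate = np.mean([np.mean(v) for v in errors.values()])
print(f"Corruption Error Rate: {corruption_error_rate:.1f}%")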

Citation

Please cite our paper and SimpleView if you use our benchmark and analysis results. Thank you!

@article{sun2022benchmarking,
      title={Benchmarking Robustness of 3D Point Cloud Recognition Against Common Corruptions}, 
      author={Jiachen Sun and Qingzhao Zhang and Bhavya Kailkhura and Zhiding Yu and Chaowei Xiao and Z. Morley Mao},
      journal={arXiv preprint arXiv:2201.12296},
      year={2022}
}
@article{goyal2021revisiting,
  title={Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline},
  author={Goyal, Ankit and Law, Hei and Liu, Bowei and Newell, Alejandro and Deng, Jia},
  journal={International Conference on Machine Learning},
  year={2021}
}

References

[1] Zhang, Jinlai, et al. "PointCutMix: Regularization Strategy for Point Cloud Classification." arXiv preprint arXiv:2101.01461 (2021).

[2] Chen, Yunlu, et al. "Pointmixup: Augmentation for point clouds." Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16. Springer International Publishing, 2020.

[3] Lee, Dogyoon, et al. "Regularization Strategy for Point Cloud via Rigidly Mixed Sample." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.

[4] Sun, Jiachen, et al. "Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions." Advances in Neural Information Processing Systems 34 (2021).

[5] Schneider, Steffen, et al. "Improving robustness against common corruptions by covariate shift adaptation." arXiv preprint arXiv:2006.16971 (2020).

[6] Wang, Dequan, et al. "Tent: Fully test-time adaptation by entropy minimization." arXiv preprint arXiv:2006.10726 (2020).

[7] Qi, Charles R., et al. "Pointnet: Deep learning on point sets for 3d classification and segmentation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.

[8] Qi, Charles R., et al. "Pointnet++: Deep hierarchical feature learning on point sets in a metric space." arXiv preprint arXiv:1706.02413 (2017).

[9] Liu, Yongcheng, et al. "Relation-shape convolutional neural network for point cloud analysis." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.

[10] Wang, Yue, et al. "Dynamic graph cnn for learning on point clouds." Acm Transactions On Graphics (tog) 38.5 (2019): 1-12.

[11] Goyal, Ankit, et al. "Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline." arXiv preprint arXiv:2106.05304 (2021).

[12] Guo, Meng-Hao, et al. "Pct: Point cloud transformer." Computational Visual Media 7.2 (2021): 187-199.

[13] Xiang, Tiange, et al. "Walk in the cloud: Learning curves for point clouds shape analysis." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.

[14] Ma, Xu, et al. "Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework." arXiv preprint arXiv:2202.07123 (2022).
