Source code for the paper "TearingNet: Point Cloud Autoencoder to Learn Topology-Friendly Representations"

Overview

TearingNet: Point Cloud Autoencoder to Learn Topology-Friendly Representations

Created by Jiahao Pang, Duanshun Li, and Dong Tian from InterDigital

(Figure: overview of the TearingNet framework)

Introduction

This repository contains the implementation of our TearingNet paper, accepted to CVPR 2021. Given a point cloud dataset containing objects of various genera, or scenes with multiple objects, we propose the TearingNet, an autoencoder that tackles the challenging task of representing such point clouds with a fixed-length descriptor. Unlike existing works that directly deform a predefined genus-zero primitive (e.g., a 2D square patch) into an object-level point cloud, our TearingNet is characterized by a proposed Tearing network module and a Folding network module that interact with each other iteratively. In particular, the Tearing network module learns the point cloud topology explicitly: by breaking edges of a primitive graph, it tears the graph into patches or introduces holes to emulate the topology of the target point cloud, leading to faithful reconstructions.
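The iterative Folding/Tearing interaction can be pictured with a minimal PyTorch sketch; the class, layer sizes, and argument names below are hypothetical illustrations, not the modules implemented in this repository:

import torch
import torch.nn as nn

class ToyFoldTearAE(nn.Module):
    """Hypothetical sketch of the Folding/Tearing interaction; not the repository's code."""
    def __init__(self, codeword_dim=512, grid_size=45, n_iters=2):
        super().__init__()
        self.grid_size, self.n_iters = grid_size, n_iters
        # F-Net: maps (codeword, 2D grid point, current 3D estimate) to a refined 3D point.
        self.folding = nn.Sequential(
            nn.Linear(codeword_dim + 2 + 3, 256), nn.ReLU(), nn.Linear(256, 3))
        # T-Net: predicts a 2D offset that "tears" the grid given the current reconstruction.
        self.tearing = nn.Sequential(
            nn.Linear(codeword_dim + 2 + 3, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, codeword):                      # codeword: (B, codeword_dim)
        b, n = codeword.shape[0], self.grid_size ** 2
        lin = torch.linspace(-1.0, 1.0, self.grid_size, device=codeword.device)
        u = lin.repeat_interleave(self.grid_size)     # regular 2D grid primitive
        v = lin.repeat(self.grid_size)
        grid = torch.stack([u, v], dim=-1).unsqueeze(0).repeat(b, 1, 1)   # (B, N, 2)
        code = codeword.unsqueeze(1).expand(-1, n, -1)                    # (B, N, D)
        pc = torch.zeros(b, n, 3, device=codeword.device)
        for _ in range(self.n_iters):
            pc = self.folding(torch.cat([code, grid, pc], dim=-1))           # fold grid to 3D
            grid = grid + self.tearing(torch.cat([code, grid, pc], dim=-1))  # tear grid in 2D
        return self.folding(torch.cat([code, grid, pc], dim=-1))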

Installation

  • We use Python 3.6, PyTorch 1.3.1 and CUDA 10.0. Example commands to set up a virtual environment with Anaconda:
conda create -n tearingnet python=3.6
conda activate tearingnet
conda install pytorch=1.3.1 torchvision=0.4.2 cudatoolkit=10.0 -c pytorch 
conda install -c open3d-admin open3d
conda install -c conda-forge tensorboardx
conda install -c anaconda h5py
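
After installation, a quick sanity check of the environment could look like this (a generic snippet, not part of the repository):

import torch
import open3d
import h5py
import tensorboardX

print(torch.__version__)          # expect 1.3.1
print(torch.cuda.is_available())  # expect True once CUDA 10.0 is set up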

Data Preparation

KITTI Multi-Object Dataset

  • Our KITTI Multi-Object (KIMO) Dataset is constructed with kitti_dataset.py of PCDet (commit 95d2ab5). Please clone and install PCDet, then prepare the KITTI dataset according to their instructions.
  • Assuming the cloned folder is named PCDet, please replace the create_groundtruth_database() function in kitti_dataset.py with our modified version provided in TearingNet/util/pcdet_create_grouth_database.py.
  • Prepare the KITTI dataset, then generate the data infos according to the instructions in the README.md of PCDet.
  • Create the folders TearingNet/dataset and TearingNet/dataset/kittimulobj then put the newly-generated folder PCDet/data/kitti/kitti_single under TearingNet/dataset/kittimulobj. Also, put the newly-generated file PCDet/data/kitti/kitti_dbinfos_object.pkl under the TearingNet/dataset/kittimulobj folder.
  • Instead of assembling several single-object point clouds and writing them out as multi-object point clouds, we generate parameters that describe the multi-object point clouds and assemble them on the fly during training/testing (a conceptual sketch of this assembly follows the file structure below). To obtain the parameters, run our prepared scripts under the TearingNet folder as follows; they generate the training and testing splits of the KIMO-5 dataset:
./scripts/launch.sh ./scripts/gen_data/gen_kitti_mulobj_train_5x5.sh
./scripts/launch.sh ./scripts/gen_data/gen_kitti_mulobj_test_5x5.sh
  • The file structure of the KIMO dataset after these steps becomes:
kittimulobj
      ├── kitti_dbinfos_object.pkl
      ├── kitti_mulobj_param_test_5x5_2048.pkl
      ├── kitti_mulobj_param_train_5x5_2048.pkl
      └── kitti_single
              ├── 0_0_Pedestrian.bin
              ├── 1000_0_Car.bin
              ├── 1000_1_Car.bin
              ├── 1000_2_Van.bin
              ...
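
As noted above, the multi-object point clouds are assembled on the fly from the generated parameters. A minimal sketch of such assembly, assuming each parameter entry is a list of (single-object .bin file, xyz offset) pairs; the real parameter format produced by the scripts may differ:

import numpy as np

def assemble_on_the_fly(param_entry, db_root="dataset/kittimulobj/kitti_single"):
    # param_entry: hypothetical list of (bin_filename, xyz_offset) pairs.
    parts = []
    for bin_name, offset in param_entry:
        # KITTI .bin files store float32 (x, y, z, intensity) tuples.
        pts = np.fromfile(f"{db_root}/{bin_name}", dtype=np.float32).reshape(-1, 4)[:, :3]
        parts.append(pts + np.asarray(offset, dtype=np.float32))
    return np.concatenate(parts, axis=0)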

CAD Model Multi-Object Dataset

  • Download the ModelNet40 dataset (modelnet40_ply_hdf5_2048) and the ShapeNet part dataset (shapenetcore_partanno_segmentation_benchmark_v0), then place them under the TearingNet/dataset folder so that its structure looks like:
dataset
    ├── cadmulobj
    ├── kittimulobj
    ├── modelnet40
    │       └── modelnet40_ply_hdf5_2048
    │                   ├── ply_data_test0.h5
    │                   ├── ply_data_test_0_id2file.json
    │                   ├── ply_data_test1.h5
    │                   ├── ply_data_test_1_id2file.json
    │                   ...
    └── shapenet_part
            ├── shapenetcore_partanno_segmentation_benchmark_v0
            │   ├── 02691156
            │   │   ├── points
            │   │   │   ├── 1021a0914a7207aff927ed529ad90a11.pts
            │   │   │   ├── 103c9e43cdf6501c62b600da24e0965.pts
            │   │   │   ├── 105f7f51e4140ee4b6b87e72ead132ed.pts
            ...
  • Extract the "person", "car", "cone" and "plant" models from ModelNet40, and the "motorbike" models from the ShapeNet part dataset, by running the following Python script under the TearingNet folder (a conceptual sketch of this extraction follows the file structure below):
python util/cad_models_collector.py
  • The previous step generates the file TearingNet/dataset/cadmulobj/cad_models.npy, based on which we generate the parameters for the CAMO dataset. To do so, launch the following scripts:
./scripts/launch.sh ./scripts/gen_data/gen_cad_mulobj_train_5x5.sh
./scripts/launch.sh ./scripts/gen_data/gen_cad_mulobj_test_5x5.sh
  • The file structure of the CAMO dataset after these steps becomes:
cadmulobj
    ├── cad_models.npy
    ├── cad_mulobj_param_test_5x5.npy
    └── cad_mulobj_param_train_5x5.npy
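
For reference, pulling models of selected classes out of the modelnet40_ply_hdf5_2048 HDF5 files can be sketched as follows; this is not util/cad_models_collector.py itself, and the label ids below are placeholders (the real script resolves class names from the dataset's shape-name list):

import glob
import h5py
import numpy as np

WANTED_LABELS = [7, 8, 24, 26]  # placeholder ids for "car", "cone", "person", "plant"

def collect_modelnet_models(h5_dir="dataset/modelnet40/modelnet40_ply_hdf5_2048"):
    collected = []
    for path in sorted(glob.glob(f"{h5_dir}/ply_data_*.h5")):
        with h5py.File(path, "r") as f:
            data, label = f["data"][:], f["label"][:].squeeze()
        collected.append(data[np.isin(label, WANTED_LABELS)])
    return np.concatenate(collected, axis=0)  # (num_models, 2048, 3)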

Experiments

Training

We employ a two-stage strategy to train the TearingNet. The first stage trains a FoldingNet (the E-Net & F-Net in the paper). Taking the KIMO dataset as an example, launch the following script under the TearingNet folder:

./scripts/launch.sh ./scripts/experiments/train_folding_kitti.sh

After the first stage finishes, a pretrained model is saved in TearingNet/results/train_folding_kitti. To load the pretrained FoldingNet into a TearingNet configuration and train the TearingNet, launch the following script:

./scripts/launch.sh ./scripts/experiments/train_tearing_kitti.sh

To see the meanings of the parameters in train_folding_kitti.sh and train_tearing_kitti.sh, check the Python script TearingNet/util/option_handler.py.
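
Conceptually, loading the pretrained FoldingNet into the TearingNet amounts to a partial state-dict load. A generic sketch (the checkpoint handling here is illustrative; the repository's scripts do this through their own options):

import torch

def load_pretrained_folding(tearing_model, ckpt_path):
    pretrained = torch.load(ckpt_path, map_location="cpu")
    model_state = tearing_model.state_dict()
    # Keep only the parameters whose names and shapes match the TearingNet model.
    matched = {k: v for k, v in pretrained.items()
               if k in model_state and v.shape == model_state[k].shape}
    model_state.update(matched)
    tearing_model.load_state_dict(model_state)
    return tearing_model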

Reconstruction

To perform the reconstruction experiment with the trained model, launch the following script:

./scripts/launch.sh ./scripts/experiments/reconstruction.sh

One may write the reconstructions to PLY files by setting a positive PC_WRITE_FREQ value. Again, please refer to TearingNet/util/option_handler.py for the meanings of individual parameters.
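
As a reference for the PLY output, a reconstructed point cloud can be written with Open3D (already installed above) along these lines; this is a generic example rather than the repository's exact writer:

import numpy as np
import open3d as o3d

def save_ply(points, path="reconstruction.ply"):
    # points: (N, 3) array of reconstructed xyz coordinates.
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
    o3d.io.write_point_cloud(path, pcd)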

Counting

To perform the counting experiment with the trained model, launch the following script:

./scripts/launch.sh ./scripts/experiments/counting.sh

Citing this Work

Please cite our work if you find it useful for your research:

@inproceedings{pang2021tearingnet, 
    title={TearingNet: Point Cloud Autoencoder to Learn Topology-Friendly Representations}, 
    author={Pang, Jiahao and Li, Duanshun and Tian, Dong}, 
    booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 
    year={2021}
}

Related Projects

torus interpolation
