Implementation of the Point Transformer layer, in PyTorch

Overview

Point Transformer - PyTorch

Implementation of the Point Transformer self-attention layer, in PyTorch. This simple circuit appears to have allowed the authors to outperform all previous methods in point cloud classification and segmentation.

Install

$ pip install point-transformer-pytorch

Usage

import torch
from point_transformer_pytorch import PointTransformerLayer

attn = PointTransformerLayer(
    dim = 128,
    pos_mlp_hidden_dim = 64,
    attn_mlp_hidden_mult = 4
)

x = torch.randn(1, 16, 128)   # per-point features: (batch, points, dim)
pos = torch.randn(1, 16, 3)   # point coordinates: (batch, points, 3)

attn(x, pos) # (1, 16, 128)
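
Continuing the example above, the layer also accepts a num_neighbors argument that restricts each point's attention to its k nearest neighbors, and a boolean mask for padded points (both appear in the issue threads below); a sketch combining them:

attn = PointTransformerLayer(
    dim = 128,
    pos_mlp_hidden_dim = 64,
    attn_mlp_hidden_mult = 4,
    num_neighbors = 16          # each point attends only to its 16 nearest neighbors
)

x = torch.randn(1, 2048, 128)
pos = torch.randn(1, 2048, 3)
mask = torch.ones(1, 2048).bool()   # True marks real (non-padding) points

attn(x, pos, mask = mask) # (1, 2048, 128)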

Citations

@misc{zhao2020point,
    title={Point Transformer}, 
    author={Hengshuang Zhao and Li Jiang and Jiaya Jia and Philip Torr and Vladlen Koltun},
    year={2020},
    eprint={2012.09164},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
Comments
  • Did You Falsify Your Experimental Results???

    No one can reproduce the performance reported in your original paper. Please post your pre-trained model or your original code. Otherwise, we must question your academic ethics!

    opened by TruthIsEveryThing 1
  • Issues with my wrapper code

    I wrote some wrapper code to turn this layer into a full transformer and I can't seem to figure out what is going wrong. The following works:

    import torch
    from torch import nn, einsum
    import x_transformers
    from point_transformer_pytorch import PointTransformerLayer
    
    layer = PointTransformerLayer(
        dim = 7,
        pos_mlp_hidden_dim = 64,
        attn_mlp_hidden_mult = 4,
        num_neighbors = 16          # only the 16 nearest neighbors would be attended to for each point
    )
    
    feats = torch.randn(1, 5, 7)
    pos = torch.randn(1, 5, 3)
    mask = torch.ones(1, 5).bool()
    
    y = layer(feats, pos, mask = mask)
    

    However this doesn't work

    import torch
    from torch import nn, einsum
    import x_transformers
    from point_transformer_pytorch import PointTransformerLayer
    
    class PointTransformer(nn.Module):
        def __init__(self, feats, mask, neighbors = 16, layers=5, dimension=5):
            
            super().__init__()
            
            self.feats = feats
            self.mask = mask
            self.neighbors = neighbors
            
            self.layers = []
            
            for _ in range(layers):
                self.layers.append(PointTransformerLayer(
                    dim = dimension,
                    pos_mlp_hidden_dim = 64,
                    attn_mlp_hidden_mult = 4,
                    num_neighbors = self.neighbors
                ))
    
        def forward(self, pos):
            curr_pos = pos
            for layer in self.layers:
                print(curr_pos)
                curr_pos = layer(self.feats, pos, self.mask)
                print("----")
            return curr_pos
    
    model = PointTransformer(feats, mask)
    model(pos)
    

    The error I'm getting is mat1 and mat2 shapes cannot be multiplied (5x7 and 5x15)
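
    Reading the error, a likely culprit: the wrapper builds its layers with dimension = 5 while feats has 7 channels, so the layer's internal projection (roughly a Linear(dim, dim * 3), i.e. 5 -> 15) cannot consume 7-dimensional features. A plain Python list also hides sub-modules from PyTorch, so nn.ModuleList is safer. A sketch of a possible fix, assuming the dim mismatch is the cause (variable names are illustrative):

    import torch
    from torch import nn
    from point_transformer_pytorch import PointTransformerLayer
    
    class PointTransformer(nn.Module):
        def __init__(self, dim = 7, neighbors = 16, layers = 5):
            super().__init__()
            # nn.ModuleList (unlike a plain list) registers the sub-layers,
            # so their parameters are trained and moved by .to(device)
            self.layers = nn.ModuleList([
                PointTransformerLayer(
                    dim = dim,               # must match the feature width of feats
                    pos_mlp_hidden_dim = 64,
                    attn_mlp_hidden_mult = 4,
                    num_neighbors = neighbors
                ) for _ in range(layers)
            ])
    
        def forward(self, feats, pos, mask = None):
            # the layer transforms features; positions stay fixed across layers
            for layer in self.layers:
                feats = layer(feats, pos, mask = mask)
            return feats
    
    feats = torch.randn(1, 5, 7)
    pos = torch.randn(1, 5, 3)
    mask = torch.ones(1, 5).bool()
    
    model = PointTransformer(dim = 7)
    model(feats, pos, mask = mask) # (1, 5, 7)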

    opened by StellaAthena 1
  • point clouds with different number of points

    Great job! I have a question about the number of points in the point cloud. Do you have any suggestions for dealing with point clouds with different numbers of points? As far as I know, point cloud models are usually applied to ShapeNet, which contains point clouds of 2048 points each. So what can we do if the number of points is not constant?
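
    Not a maintainer answer, but one common workaround is to pad every cloud in a batch to the same length and pass the boolean mask the layer already accepts, so padded slots are excluded from attention; a sketch:

    import torch
    from point_transformer_pytorch import PointTransformerLayer
    
    attn = PointTransformerLayer(
        dim = 128,
        pos_mlp_hidden_dim = 64,
        attn_mlp_hidden_mult = 4
    )
    
    # two clouds with 1500 and 2048 points, zero-padded to a common length
    lengths = torch.tensor([1500, 2048])
    feats = torch.randn(2, 2048, 128)
    pos = torch.randn(2, 2048, 3)
    mask = torch.arange(2048)[None, :] < lengths[:, None]   # (2, 2048), True = real point
    
    attn(feats, pos, mask = mask) # (2, 2048, 128)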

    opened by 1999kevin 0
  • Scalar attention or vector attention in the multi-head variant

    It seems that the implementation of the multi-head point transformer produces scalar attention scores for each head.

    https://github.com/lucidrains/point-transformer-pytorch/blob/99bc3958138d8c9d3b882e4ac50b1a18a86160fe/point_transformer_pytorch/multihead_point_transformer_pytorch.py#L62

    opened by ZikangZhou 2
  • The layer structure and mask

    Hi,

    Thanks for this contribution. In the implementation of attn_mlp, the first linear layer increases the dimension. Is this standard practice? I could not find any details about it in the paper. The paper also does not describe the use of a mask; is that likewise standard practice for attention layers?

    Thanks!!
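
    For context (not an authoritative answer): expanding and then re-projecting inside an MLP mirrors the standard transformer feed-forward block, which attn_mlp_hidden_mult appears to control; roughly:

    from torch import nn
    
    dim, mult = 128, 4
    # expand-then-project, as in a transformer feed-forward block;
    # attn_mlp_hidden_mult = 4 would give a hidden width of 512 here
    attn_mlp = nn.Sequential(
        nn.Linear(dim, dim * mult),
        nn.ReLU(),
        nn.Linear(dim * mult, dim)
    )

    The mask, likewise, is a common convenience for batching variable-sized point clouds rather than something taken from the paper.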

    opened by ayushais 1
  • Invariant to cardinality?

    Dear Authors, In your paper you wrote: "The layer is invariant to permutation and cardinality and is thus inherently suited to point cloud processing."

    I do not understand this statement, because your PointTransformerLayer https://github.com/lucidrains/point-transformer-pytorch/blob/main/point_transformer_pytorch/point_transformer_pytorch.py#L31 requires the dim parameter at initialization, so it always expects dim elements in the input. What if a point cloud has dim+1 points?

    Thank you in advance.
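
    For clarity: dim is the per-point feature width, not the number of points; the point count is a free axis of the input, which is what the cardinality claim refers to. A quick check, using the layer as documented above:

    import torch
    from point_transformer_pytorch import PointTransformerLayer
    
    attn = PointTransformerLayer(
        dim = 128,
        pos_mlp_hidden_dim = 64,
        attn_mlp_hidden_mult = 4
    )
    
    # the same layer handles any number of points; dim fixes features, not cardinality
    for n in (16, 17, 1024):
        x = torch.randn(1, n, 128)   # (batch, points, features)
        pos = torch.randn(1, n, 3)
        attn(x, pos) # (1, n, 128)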

    opened by decadenza 0
  • Cost too much memory

    I'm not sure whether I used the point transformer correctly: I implemented just one block for training, and the shapes of (x, pos) on each GPU are both [16, 2048, 3]. Later I was informed that my GPU ran out of memory (11.77 GB total capacity).
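
    One plausible cause (an observation, not a confirmed diagnosis): without num_neighbors set, the layer attends all-to-all, and vector attention materializes pairwise tensors of shape roughly (batch, n, n, hidden); at batch 16, n = 2048, hidden 64 in fp32 that is on the order of 17 GB on its own. Restricting attention to the k nearest neighbors keeps those tensors near (batch, n, k, hidden):

    import torch
    from point_transformer_pytorch import PointTransformerLayer
    
    attn = PointTransformerLayer(
        dim = 3,
        pos_mlp_hidden_dim = 64,
        attn_mlp_hidden_mult = 4,
        num_neighbors = 16   # caps pairwise tensors at n x 16 instead of n x n
    )
    
    x = torch.randn(16, 2048, 3)
    pos = torch.randn(16, 2048, 3)
    attn(x, pos) # (16, 2048, 3)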

    opened by JLU-Neal 9
Releases: 0.1.5
Owner
Phil Wang
Working with Attention. It's all we need.