Implementation of the Point Transformer layer, in PyTorch

Overview

Point Transformer - PyTorch

Implementation of the Point Transformer self-attention layer, in PyTorch. This simple attention design allowed the authors to outperform all previous methods in point cloud classification and segmentation.

Install

$ pip install point-transformer-pytorch

Usage

import torch
from point_transformer_pytorch import PointTransformerLayer

attn = PointTransformerLayer(
    dim = 128,
    pos_mlp_hidden_dim = 64,
    attn_mlp_hidden_mult = 4
)

x = torch.randn(1, 16, 128)
pos = torch.randn(1, 16, 3)

attn(x, pos) # (1, 16, 128)
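
The layer also supports restricting attention to each point's nearest neighbors and masking out padded points; both options appear in the issue threads below:

attn = PointTransformerLayer(
    dim = 128,
    pos_mlp_hidden_dim = 64,
    attn_mlp_hidden_mult = 4,
    num_neighbors = 16          # only the 16 nearest neighbors are attended to for each point
)

x = torch.randn(1, 16, 128)
pos = torch.randn(1, 16, 3)
mask = torch.ones(1, 16).bool() # True marks real points, False marks padding

attn(x, pos, mask = mask) # (1, 16, 128)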

Citations

@misc{zhao2020point,
    title={Point Transformer}, 
    author={Hengshuang Zhao and Li Jiang and Jiaya Jia and Philip Torr and Vladlen Koltun},
    year={2020},
    eprint={2012.09164},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
Comments
  • Did You Falsify Your Experimental Results???

    No one can reproduce the performance reported in your original paper. Please post your pre-trained model or your original code. Otherwise, we must question your academic ethics!

    opened by TruthIsEveryThing 1
  • Issues with my wrapper code

    I wrote some wrapper code to turn this layer into a full transformer and I can't seem to figure out what is going wrong. The following works:

    import torch
    from torch import nn, einsum
    import x_transformers
    from point_transformer_pytorch import PointTransformerLayer
    
    layer = PointTransformerLayer(
        dim = 7,
        pos_mlp_hidden_dim = 64,
        attn_mlp_hidden_mult = 4,
        num_neighbors = 16          # only the 16 nearest neighbors would be attended to for each point
    )
    
    feats = torch.randn(1, 5, 7)
    pos = torch.randn(1, 5, 3)
    mask = torch.ones(1, 5).bool()
    
    y = layer(feats, pos, mask = mask)
    

    However, this doesn't work:

    import torch
    from torch import nn, einsum
    import x_transformers
    from point_transformer_pytorch import PointTransformerLayer
    
    class PointTransformer(nn.Module):
        def __init__(self, feats, mask, neighbors = 16, layers=5, dimension=5):
            
            super().__init__()
            
            self.feats = feats
            self.mask = mask
            self.neighbors = neighbors
            
            self.layers = []
            
            for _ in range(layers):
                self.layers.append(PointTransformerLayer(
                    dim = dimension,
                    pos_mlp_hidden_dim = 64,
                    attn_mlp_hidden_mult = 4,
                    num_neighbors = self.neighbors
                ))
    
        def forward(self, pos):
            curr_pos = pos
            for layer in self.layers:
                print(curr_pos)
                curr_pos = layer(self.feats, pos, self.mask)
                print("----")
            return curr_pos
    
    model = PointTransformer(feats, mask)
    model(pos)
    

    The error I'm getting is mat1 and mat2 shapes cannot be multiplied (5x7 and 5x15)
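
    A likely cause, judging from the error shapes: the model is built with the default dimension = 5, so each layer's internal query/key/value projection is a 5-to-15 linear map, but feats carries 7 channels per point. Setting the dimension to match the feature size, and using nn.ModuleList so the sub-layers register their parameters, gives a working sketch (same hypothetical shapes as above; moving feats and mask into forward also keeps the module reusable across inputs):

    import torch
    from torch import nn
    from point_transformer_pytorch import PointTransformerLayer

    class PointTransformer(nn.Module):
        def __init__(self, dim = 7, neighbors = 16, layers = 5):
            super().__init__()
            # nn.ModuleList (not a plain Python list) so parameters are tracked
            self.layers = nn.ModuleList([
                PointTransformerLayer(
                    dim = dim,              # must equal the feature dimension of feats
                    pos_mlp_hidden_dim = 64,
                    attn_mlp_hidden_mult = 4,
                    num_neighbors = neighbors
                ) for _ in range(layers)
            ])

        def forward(self, feats, pos, mask = None):
            # the layers transform features; positions are reused unchanged at every layer
            for layer in self.layers:
                feats = layer(feats, pos, mask = mask)
            return feats

    feats = torch.randn(1, 5, 7)
    pos = torch.randn(1, 5, 3)
    mask = torch.ones(1, 5).bool()

    model = PointTransformer(dim = 7)
    y = model(feats, pos, mask = mask) # (1, 5, 7)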

    opened by StellaAthena 1
  • point clouds with different number of points

    Great job! I have a question about the number of points in the point cloud. Do you have any suggestions for dealing with point clouds with different numbers of points? As far as I know, point cloud models are usually applied to ShapeNet, which contains point clouds with 2048 points. So what can we do if the number of points is not constant?
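
    One workaround, assuming the mask argument shown in the usage example above: pad every cloud in a batch up to the largest point count and mark the padding with a boolean mask, so the layer ignores the padded points. A minimal sketch with hypothetical cloud sizes:

    import torch
    from point_transformer_pytorch import PointTransformerLayer

    attn = PointTransformerLayer(dim = 128, pos_mlp_hidden_dim = 64, attn_mlp_hidden_mult = 4)

    # two clouds with different point counts
    clouds = [torch.randn(100, 128), torch.randn(128, 128)]
    coords = [torch.randn(100, 3), torch.randn(128, 3)]

    n_max = max(c.shape[0] for c in clouds)
    pad = lambda t: torch.cat([t, t.new_zeros(n_max - t.shape[0], t.shape[1])])

    feats = torch.stack([pad(c) for c in clouds])                           # (2, 128, 128)
    pos = torch.stack([pad(p) for p in coords])                             # (2, 128, 3)
    mask = torch.stack([torch.arange(n_max) < c.shape[0] for c in clouds])  # (2, 128) bool

    out = attn(feats, pos, mask = mask) # (2, 128, 128); padded rows are masked out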

    opened by 1999kevin 0
  • Scalar attention or vector attention in the multi-head variant

    It seems that the implementation of the multi-head point transformer produces scalar attention scores for each head.

    https://github.com/lucidrains/point-transformer-pytorch/blob/99bc3958138d8c9d3b882e4ac50b1a18a86160fe/point_transformer_pytorch/multihead_point_transformer_pytorch.py#L62

    opened by ZikangZhou 2
  • The layer structure and mask

    Hi,

    Thanks for this contribution. In the implementation of attn_mlp, the first linear layer increases the dimension. Is this standard practice? I did not find any details about it in the paper. The paper also does not describe the use of a mask; is this again some standard practice for attention layers?

    Thanks!!

    opened by ayushais 1
  • Invariant to cardinality?

    Dear Authors, In your paper you wrote: "The layer is invariant to permutation and cardinality and is thus inherently suited to point cloud processing."

    I do not understand this statement, because your PointTransformerLayer https://github.com/lucidrains/point-transformer-pytorch/blob/main/point_transformer_pytorch/point_transformer_pytorch.py#L31 requires the dim parameter at initialization, so it always expects dim elements in the input. What if a point cloud has dim+1 points?

    Thank you in advance.
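
    Note that dim here is the per-point feature dimension, not the number of points; the same layer instance accepts clouds of any cardinality:

    import torch
    from point_transformer_pytorch import PointTransformerLayer

    attn = PointTransformerLayer(dim = 128, pos_mlp_hidden_dim = 64, attn_mlp_hidden_mult = 4)

    attn(torch.randn(1, 16, 128), torch.randn(1, 16, 3)) # 16 points -> (1, 16, 128)
    attn(torch.randn(1, 17, 128), torch.randn(1, 17, 3)) # 17 points -> (1, 17, 128)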

    opened by decadenza 0
  • Cost too much memory

    I'm not sure whether I used the point transformer correctly: I implemented just one block for training, and the shapes of (x, pos) on each GPU are both [16, 2048, 3]. Later I found that my GPU was running out of memory (11.77 GB total capacity).
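
    Without num_neighbors set, the layer attends over all pairs of points, so its intermediate attention tensors grow roughly with batch x points x points x dim; at [16, 2048, 3] per GPU that can plausibly exhaust 11.77 GB. One mitigation, using the num_neighbors option shown above:

    import torch
    from point_transformer_pytorch import PointTransformerLayer

    attn = PointTransformerLayer(
        dim = 3,
        pos_mlp_hidden_dim = 64,
        attn_mlp_hidden_mult = 4,
        num_neighbors = 16      # attend to the 16 nearest neighbors instead of all 2048 points
    )

    x = torch.randn(16, 2048, 3)
    pos = torch.randn(16, 2048, 3)
    out = attn(x, pos) # (16, 2048, 3), with far smaller attention tensors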

    opened by JLU-Neal 9
Releases (0.1.5)

Owner

Phil Wang
Working with Attention. It's all we need.