A pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.

Overview

This repository provides a pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.

This version relies on the FFT implementation provided with PyTorch 0.4.0 onward. For older versions of PyTorch, use the tag v0.3.0.

Installation

Run the setup.py script, for instance:

python setup.py install

Usage

class compact_bilinear_pooling.CompactBilinearPooling(input1_size, input2_size, output_size, h1=None, s1=None, h2=None, s2=None)
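For reference, following Gao et al.'s Compact Bilinear Pooling, the layer approximates the outer-product (bilinear) pooling of two vectors with Count Sketch projections Ψ combined through an FFT-based circular convolution:

    z = IFFT( FFT(Ψ(x, h1, s1)) ∘ FFT(Ψ(y, h2, s2)) )

where h1, h2 are random hash indices, s1, s2 are random ±1 signs, and ∘ is the element-wise product. The h and s arguments default to None, in which case the sketches are presumably generated internally.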

Basic usage:

import torch

from compact_bilinear_pooling import CountSketch, CompactBilinearPooling

input_size = 2048
output_size = 16000
mcb = CompactBilinearPooling(input_size, input_size, output_size).cuda()
x = torch.rand(4, input_size).cuda()
y = torch.rand(4, input_size).cuda()

z = mcb(x, y)
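
CountSketch can also be applied on its own. A minimal sketch, assuming its constructor takes (input_size, output_size) by analogy with CompactBilinearPooling above (the exact signature is an assumption, not documented here):

    # Hedged sketch: project a single input with CountSketch alone.
    # The (input_size, output_size) constructor is assumed by analogy.
    cs = CountSketch(input_size, output_size).cuda()
    cx = cs(x)  # signed random-hash projection of x into R^output_size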

Test

A couple of tests of the Compact Bilinear Pooling implementation and its gradient can be run using:

python test.py

Comments
  • The value in ComplexMultiply_backward function

    Hi @gdlg, thanks for this nice work. I'm confused about the backward procedure of complex multiplication, so I hope you can help me figure it out.

    In the forward pass,

    Z = XY = (Rx + i * Ix)(Ry + i * Iy) = (RxRy - IxIy) + i * (IxRy + RxIy) = Rz + i * Iz
    

    In the backward pass, according to the chain rule, we have

    grad_(L/X) = grad_(L/Z) * grad(Z/X)
               = grad_Z * Y
               = (R_gz + i * I_gz)(Ry + i * Iy)
               = (R_gzRy - I_gzIy) + i * (I_gzRy + R_gzIy)
    

    So, why is this line implemented with value = 1 for the real part and value = -1 for the imaginary part?

    Is there something wrong with my derivation? Thanks.
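
    A quick numerical check (an editor's sketch, not code from this repo) shows that the backward of a complex product multiplies the incoming gradient by the conjugate of the other factor, i.e. grad_X = grad_Z * conj(Y), which is exactly where the -1 on the imaginary part comes from:

        import torch

        # Treat real/imaginary parts as independent real tensors.
        Rx = torch.randn(5, requires_grad=True)
        Ix = torch.randn(5, requires_grad=True)
        Ry, Iy = torch.randn(5), torch.randn(5)

        Rz = Rx * Ry - Ix * Iy        # real part of Z = X * Y
        Iz = Ix * Ry + Rx * Iy        # imaginary part of Z = X * Y
        loss = Rz.sum() + Iz.sum()    # real-valued loss, so grad_Z = 1 + i
        loss.backward()

        # grad_X = grad_Z * conj(Y) predicts (Ry + Iy) + i * (Ry - Iy):
        print(torch.allclose(Rx.grad, Ry + Iy))  # True
        print(torch.allclose(Ix.grad, Ry - Iy))  # True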

    opened by KaiyuYue 8
  • Missing Rfft

    When I run the test module, it reports that the pytorch_fft.fft.autograd module has no attribute Rfft. Which version of pytorch_fft should I install to fit this code?

    opened by PeiqinZhuang 8
  • Save the model - TypeError: can't pickle Rfft objects

    How do you save and load the model? I'm using torch.save, which causes the following error:

    File "x/anaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 135, in save
       return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickl                                                                                                                               e_protocol))
     File "x/anaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 117, in _with_file_like
       return body(f)
     File "xanaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 135, in <lambda>
       return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickl                                                                                                                               e_protocol))
     File "x/anaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 198, in _save
       pickler.dump(obj)
    TypeError: can't pickle Rfft objects
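
    A common workaround for unpicklable modules (a general PyTorch pattern, not a fix shipped with this repo) is to save only the state_dict and rebuild the module before loading:

        import torch

        # Save parameters/buffers only, so the unpicklable Rfft object is never pickled.
        torch.save(model.state_dict(), 'mcb_model.pth')

        # Later: rebuild the architecture (build_model is a hypothetical user function)
        # and load the weights back.
        model = build_model()
        model.load_state_dict(torch.load('mcb_model.pth'))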
    
    
    opened by idansc 3
  • Multi GPU support

    I modified

    class CompactBilinearPooling(nn.Module):
        def forward(self, x, y):
            return CompactBilinearPoolingFn.apply(self.sketch1.h, self.sketch1.s, self.sketch2.h, self.sketch2.s, self.output_size, x, y)
    

    to

    def forward(self, x):
        x = x.permute(0, 2, 3, 1)  # NCHW to NHWC
        y = Variable(x.data.clone())
        out = CompactBilinearPoolingFn.apply(self.sketch1.h, self.sketch1.s, self.sketch2.h, self.sketch2.s, self.output_size, x, y).permute(0, 3, 1, 2)  # back to NCHW
        out = nn.functional.adaptive_avg_pool2d(out, 1)  # N,C,1,1
        # element-wise signed square root and instance-wise l2 normalization
        out = (torch.sqrt(nn.functional.relu(out)) - torch.sqrt(nn.functional.relu(-out))) / torch.norm(out, 2, 1, True)
        return out
    

    This makes it easier to plug the compact pooling layer into PyTorch CNNs:

    model.avgpool = CompactBilinearPooling(input_C, input_C, bilinear['dim'])
    model.fc = nn.Linear(int(model.fc.in_features/input_C*bilinear['dim']), num_classes)

    However, when I run this on multiple GPUs, I get the following error:

    Traceback (most recent call last):
      File "train3_bilinear_pooling.py", line 400, in <module>
        run()
      File "train3_bilinear_pooling.py", line 219, in run
        train(train_loader, model, criterion, optimizer, epoch)
      File "train3_bilinear_pooling.py", line 326, in train
        return _each_epoch('train', train_loader, model, criterion, optimizer, epoch)
      File "train3_bilinear_pooling.py", line 270, in _each_epoch
        output = model(input_var)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 319, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 67, in forward
        replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 72, in replicate
        return replicate(module, device_ids)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 19, in replicate
        buffer_copies = comm.broadcast_coalesced(buffers, devices)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/cuda/comm.py", line 55, in broadcast_coalesced
        for chunk in _take_tensors(tensors, buffer_size):
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/_utils.py", line 232, in _take_tensors
        if tensor.is_sparse:
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/autograd/variable.py", line 68, in __getattr__
        return object.__getattribute__(self, name)
    AttributeError: 'Variable' object has no attribute 'is_sparse'

    Do you have any ideas?
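
    The traceback suggests the sketch buffers h and s are stored as Variables, which that PyTorch version's buffer broadcast cannot handle. A hedged sketch of one possible workaround (an assumption, not a verified patch; the avgpool/sketch names follow the snippets above) is to unwrap the buffers to plain tensors before wrapping the model in DataParallel:

        import torch
        from torch.autograd import Variable

        # Hypothetical workaround sketch for pre-0.4 PyTorch: replace Variable
        # buffers with their underlying tensors so they can be broadcast.
        for sketch in (model.avgpool.sketch1, model.avgpool.sketch2):
            for name in ('h', 's'):
                buf = getattr(sketch, name)
                if isinstance(buf, Variable):
                    sketch._buffers[name] = buf.data
        model = torch.nn.DataParallel(model).cuda()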

    opened by YanWang2014 3
  • AssertionError: False is not true

    Hi, I am back again. When running test.py, I got the following error:

        File "test.py", line 69, in test_gradients
            self.assertTrue(torch.autograd.gradcheck(cbp, (x,y), eps=1))
        AssertionError: False is not true

    What does this mean?

    opened by YanWang2014 2
  • Support for PyTorch 1.11?

    Hi, torch.fft() and torch.irfft() are no longer functions; torch.fft is now a module, and there appear to be a lot of changes to the parameters. I am currently trying to combine two types of features with compact bilinear pooling. Do you know how to port this code to PyTorch 1.11?
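
    For reference, a minimal sketch of the torch.fft equivalents that replaced the old functions (PyTorch >= 1.8 API; mapping it onto this repo's exact Rfft usage is an assumption):

        import torch

        # rfft -> element-wise complex multiply -> irfft, the FFT core of CBP.
        x = torch.randn(4, 16000)
        y = torch.randn(4, 16000)
        fx = torch.fft.rfft(x)                     # replaces the old torch.rfft(x, 1)
        fy = torch.fft.rfft(y)
        z = torch.fft.irfft(fx * fy, x.shape[-1])  # replaces the old torch.irfft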

    opened by bhosalems 1
  • Training does not converge after joining compact bilinear layer

    Source code:

        x = self.features(x)  # [4,512,28,28]
        batch_size = x.size(0)
        x = (torch.bmm(x, torch.transpose(x, 1, 2)) / 28 ** 2).view(batch_size, -1)
        x = torch.nn.functional.normalize(torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10))
        x = self.classifiers(x)
        return x

    My code:

        x = self.features(x)  # [4,512,28,28]
        x = x.view(x.shape[0], x.shape[1], -1)  # [4,512,784]
        x = x.permute(0, 2, 1)  # [4,784,512]
        x = self.mcb(x, x)  # [4,784,512]
        batch_size = x.size(0)
        x = x.sum(1)  # in 2D, dim=0 sums over columns and dim=1 over rows; this tensor is 3D, so this sums over the spatial positions
        x = torch.nn.functional.normalize(torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10))
        x = self.classifiers(x)
        return x

    The training does not converge after this modification. Why? Is it a problem with my code?
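
    One hedged observation (an editor's note, not a verified fix): the reference code averages the bilinear features over the 28*28 positions (the "/ 28 ** 2"), while the modified code sums over dim 1 without rescaling, so the magnitudes entering the signed square root are roughly 784 times larger. Matching the scales is one thing to try:

        import torch

        # Stand-in for the MCB output [N, HW, D]; shapes follow the snippet above.
        feats = torch.randn(4, 784, 16000)
        pooled = feats.mean(dim=1)  # mean over spatial positions instead of .sum(1)
        pooled = torch.nn.functional.normalize(
            torch.sign(pooled) * torch.sqrt(torch.abs(pooled) + 1e-10))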

    opened by roseif 3
Releases: v0.4.0

Owner: Grégoire Payen de La Garanderie