Exponential adaptive pooling for PyTorch

Overview

AdaPool: Exponential Adaptive Pooling for Information-Retaining Downsampling



Abstract

Pooling layers are essential building blocks of Convolutional Neural Networks (CNNs) that reduce computational overhead and increase the receptive fields of subsequent convolutional operations. They aim to produce downsampled volumes that closely resemble the input volume while, ideally, also being computationally and memory efficient. It is a challenge to meet both requirements jointly. To this end, we propose an adaptive and exponentially weighted pooling method named adaPool. Our proposed method uses a parameterized fusion of two sets of pooling kernels that are based on the exponent of the Dice-Sørensen coefficient and the exponential maximum, respectively. A key property of adaPool is its bidirectional nature. In contrast to common pooling methods, weights can be used to upsample a downsampled activation map. We term this method adaUnPool. We demonstrate how adaPool improves the preservation of detail through a range of tasks including image and video classification and object detection. We then evaluate adaUnPool on image and video frame super-resolution and frame interpolation tasks. For benchmarking, we introduce Inter4K, a novel high-quality, high frame-rate video dataset. Our combined experiments demonstrate that adaPool systematically achieves better results across tasks and backbone architectures, while introducing a minor additional computational and memory overhead.


[arXiv preprint -- coming soon]

[Figure: side-by-side comparison of original inputs and their adaPool-downsampled counterparts]
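The abstract above describes adaPool as a parameterized fusion of two exponentially weighted kernels: exponential maximum (eM) pooling, as in SoftPool, and pooling weighted by the exponent of the Dice-Sørensen coefficient (eDSCW). The sketch below is a simplified, self-contained illustration of that idea in plain PyTorch. The function name, the per-region soft-Dice formulation, and the scalar `beta` are simplifications chosen for illustration; they do not reproduce the CUDA implementation shipped in this repository.

```python
import torch
import torch.nn.functional as F

def adapool2d_sketch(x, beta, kernel_size=2, stride=2):
    """Simplified reference of the beta-weighted fusion described in the
    abstract: eM (SoftPool-style) pooling combined with an exponentially
    weighted soft Dice-Sorensen (eDSCW) pooling. Illustration only."""
    n, c, h, w = x.shape
    # Gather kernel regions: (N, C, k*k, L), where L is the number of output locations.
    patches = F.unfold(x, kernel_size, stride=stride)
    patches = patches.view(n, c, kernel_size * kernel_size, -1)

    # eM: each activation is weighted by the softmax of the activations
    # within its kernel region (exponential maximum pooling).
    em = (torch.softmax(patches, dim=2) * patches).sum(dim=2)

    # eDSCW: weight each position by the exponent of a soft Dice-Sorensen
    # coefficient between its channel vector and the region mean vector.
    mean = patches.mean(dim=2, keepdim=True)                       # (N, C, 1, L)
    num = 2.0 * (patches * mean).sum(dim=1, keepdim=True)          # (N, 1, k*k, L)
    den = patches.pow(2).sum(dim=1, keepdim=True) + mean.pow(2).sum(dim=1, keepdim=True)
    dsc = num / (den + 1e-8)
    edscw = (torch.softmax(dsc, dim=2) * patches).sum(dim=2)

    # Parameterized fusion of the two pooled maps.
    out = beta * edscw + (1.0 - beta) * em
    oh = (h - kernel_size) // stride + 1
    ow = (w - kernel_size) // stride + 1
    return out.view(n, c, oh, ow)

# beta is a scalar here for simplicity; in the library it is a parameter
# matching the output size (see the Usage section below).
y = adapool2d_sketch(torch.randn(2, 8, 32, 32), beta=0.5)
print(y.shape)  # torch.Size([2, 8, 16, 16])
```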

Dependencies

All parts of the code assume that torch is version 1.4 or higher. Note, however, that `torch.nan_to_num`, which is used in parts of the code, was only introduced in PyTorch 1.8.0 (see the Comments section below), so a newer version may be required. There may be instability issues on earlier versions.

This work relies on the previous repository for exponential maximum pooling (alexandrosstergiou/SoftPool). Before opening an issue, please have a look at that repository, as common installation and runtime problems have already been addressed there.

! Disclaimer: This repository's structure is heavily influenced by Ziteng Gao's LIP repo: https://github.com/sebgao/LIP

Installation

You can build and install the package with the following commands:

$ git clone https://github.com/alexandrosstergiou/adaPool.git
$ cd adaPool/pytorch
$ make install
--- (optional) ---
$ make test

Usage

You can load any of the 1D, 2D or 3D variants after installation with:

# Ensure that you import `torch` first!
import torch
import adapool_cuda

# For function calls
from adaPool import adapool1d, adapool2d, adapool3d, adaunpool
from adaPool import edscwpool1d, edscwpool2d, edscwpool3d
from adaPool import empool1d, empool2d, empool3d
from adaPool import idwpool1d, idwpool2d, idwpool3d

# For class calls
from adaPool import AdaPool1d, AdaPool2d, AdaPool3d
from adaPool import EDSCWPool1d, EDSCWPool2d, EDSCWPool3d
from adaPool import EMPool1d, EMPool2d, EMPool3d
from adaPool import IDWPool1d, IDWPool2d, IDWPool3d
  • (ada/edscw/em/idw)pool<x>d: functional interfaces for each of the respective pooling methods.
  • (Ada/Edscw/Em/Idw)Pool<x>d: class-based versions that create module objects which can be referenced in your code (a hedged usage sketch follows this list).
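A minimal usage sketch for the class interface is shown below. This is an assumption-laden example rather than documented API: the constructor arguments (`kernel_size`, `stride`, `beta`) and the requirement that `beta` be a tuple or tensor matching the output size `(oH, oW)` are inferred from this README and from the assertion message quoted in the Comments section.

```python
import torch
import adapool_cuda                    # compiled CUDA extension from `make install`
from adaPool import AdaPool2d

# Sketch only: argument names and defaults are assumptions, not a verified API.
x = torch.randn(1, 64, 56, 56).cuda()  # (N, C, H, W) input on the GPU
pool = AdaPool2d(kernel_size=2,        # 2x2 pooling regions
                 stride=2,             # downsample H and W by a factor of 2
                 beta=(28, 28))        # assumed: output spatial size (oH, oW)
y = pool(x)
print(y.shape)                         # expected: torch.Size([1, 64, 28, 28])
```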

Citation

@article{stergiou2021adapool,
  title={AdaPool: Exponential Adaptive Pooling for Information-Retaining Downsampling},
  author={Stergiou, Alexandros and Poppe, Ronald},
  journal={arXiv preprint},
  year={2021}}

Licence

MIT

Comments
  • Installation issue on Google Colab

    Hi, thanks for providing a CUDA-optimized implementation. While building the library I encountered an issue with "inf" in limits.cuh.

    CUDA/limits.cuh(119): error: identifier "inf" is undefined
    
    CUDA/limits.cuh(120): error: identifier "inf" is undefined
    
    CUDA/limits.cuh(128): error: identifier "inf" is undefined
    
    CUDA/limits.cuh(129): error: identifier "inf" is undefined
    
    4 errors detected in the compilation of "CUDA/adapool_cuda_kernel.cu".
    error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
    Makefile:2: recipe for target 'install' failed
    make: *** [install] Error 1
    

    The following notebook provides more details, including environment information: https://colab.research.google.com/drive/1T6Nxe2qbjKxXzo2IimFMYBn52qbthlZB?usp=sharing

    opened by okbalefthanded 2
  • Solution: "Unresolved extern function '_Z3powdi'"

    CUDA 11.0

    When I tried to build your project on Windows 10, I encountered the following problem: "ptxas fatal : Unresolved extern function '_Z3powdi'".

    Reason: incorrect use of the pow function in the CUDA code. Solution: for example, pow(x, 2) can be changed to x * x.

    opened by Culturenotes 1
  • Does AdaPool2d's beta require a fixed image size?

    I'm currently running AdaPool2d as a replacement for MaxPool2d in ResNet's stem, similar to how you did it in SoftPool. However, I keep getting an AssertionError at line 1325, as shown below:

    assert isinstance(beta, tuple) or torch.is_tensor(beta), 'Agument `beta` can only be initialized with Tuple or Tensor type objects and should correspond to size (oH, oW)'
    

    Does this mean beta requires a fixed image size, e.g. (224, 224)? Or is there a way to make it adaptive across varying image sizes (e.g. for object detection)?

    opened by johnanthonyjose 1
  • The version of PyTorch and how to deal with the `nan_to_num` function in lower versions

    Thank you for this amazing project; I found it via SoftPool. After installing it I ran `make test`, but got `AttributeError: module 'torch' has no attribute 'nan_to_num'`. After checking, this function, used in idea.py, was introduced in PyTorch 1.8.0, so the torch version in the README may need to be updated. Or is there an easy way to be compatible with lower versions? (A possible compatibility shim is sketched after this list.)

    opened by MaxChanger 1
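Regarding the `nan_to_num` comment above: until the version requirement in this README is raised, one possible workaround is to backfill the missing function on PyTorch < 1.8. The shim below is an untested sketch of that idea and is not part of the adaPool codebase.

```python
import torch

# Hypothetical compatibility shim for PyTorch < 1.8, where torch.nan_to_num
# does not exist. Behaviour should be checked against what adaPool expects.
if not hasattr(torch, "nan_to_num"):
    def _nan_to_num(x, nan=0.0, posinf=None, neginf=None):
        # Replace NaNs and infinities element-wise, mirroring the defaults of
        # the real torch.nan_to_num (dtype max/min for +/- infinity).
        finfo = torch.finfo(x.dtype)
        posinf = finfo.max if posinf is None else posinf
        neginf = finfo.min if neginf is None else neginf
        x = torch.where(torch.isnan(x), torch.full_like(x, nan), x)
        x = torch.where(x == float("inf"), torch.full_like(x, posinf), x)
        x = torch.where(x == float("-inf"), torch.full_like(x, neginf), x)
        return x
    torch.nan_to_num = _nan_to_num  # monkey-patch before importing adaPool
```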
Releases (v0.2)
Owner
Alexandros Stergiou
Computer Vision and Machine Learning Researcher