A PyTorch implementation for PyramidNets (Deep Pyramidal Residual Networks)

Overview

This repository contains a PyTorch implementation of the paper Deep Pyramidal Residual Networks (CVPR 2017) by Dongyoon Han*, Jiwhan Kim*, and Junmo Kim (*: equal contribution). The code is based on the example provided in the PyTorch examples and on the nice implementation of Densely Connected Convolutional Networks.

Two other implementations, in LuaTorch and Caffe, are also provided:

  1. A LuaTorch implementation for PyramidNets,
  2. A Caffe implementation for PyramidNets.

Usage examples

To train additive PyramidNet-200 (alpha=300, with bottleneck) on the ImageNet-1k dataset with 8 GPUs:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py --data ~/dataset/ILSVRC/Data/CLS-LOC/ --net_type pyramidnet --lr 0.05 --batch_size 128 --depth 200 -j 16 --alpha 300 --print-freq 1 --expname PyramidNet-200 --dataset imagenet --epochs 100

To train additive PyramidNet-110 (alpha=64, without bottleneck) on the CIFAR-10 dataset with a single GPU:

CUDA_VISIBLE_DEVICES=0 python train.py --net_type pyramidnet --alpha 64 --depth 110 --no-bottleneck --batch_size 32 --lr 0.025 --print-freq 1 --expname PyramidNet-110 --dataset cifar10 --epochs 300

To train additive PyramidNet-164 (alpha=48, with bottleneck) on the CIFAR-100 dataset with 4 GPUs:

CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --net_type pyramidnet --alpha 48 --depth 164 --batch_size 128 --lr 0.5 --print-freq 1 --expname PyramidNet-164 --dataset cifar100 --epochs 300
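
The --alpha and --depth flags jointly determine how the channel width grows across residual units. Below is a minimal Python sketch of the paper's additive widening rule (each unit adds alpha/N channels on top of the 16-channel CIFAR stem); the function name and details are illustrative, not the repository's exact code:

# Additive widening rule: starting from the stem width D_0 = 16,
# each of the N residual units adds alpha / N channels, so the last
# unit reaches 16 + alpha channels. (Bottleneck units additionally
# get the usual 4x expansion, omitted here for brevity.)
def pyramidal_widths(depth, alpha, bottleneck=False, stem=16):
    # CIFAR depths follow 6n + 2 (basic) or 9n + 2 (bottleneck).
    n_units = 3 * ((depth - 2) // (9 if bottleneck else 6))
    add_rate = alpha / n_units
    widths, d = [], float(stem)
    for _ in range(n_units):
        d += add_rate
        widths.append(int(d))  # floor to an integer channel count
    return widths

widths = pyramidal_widths(depth=110, alpha=64)  # the CIFAR-10 example above
print(len(widths), widths[0], widths[-1])       # 54 17 80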

Notes

  1. This implementation contains the training (and test) code for the add-PyramidNet architecture on the ImageNet-1k, CIFAR-10, and CIFAR-100 datasets.
  2. The traditional data augmentation for the ImageNet and CIFAR datasets is used, following fb.resnet.torch (see the sketch after this list).
  3. Example code for ResNet and Pre-ResNet is also included.
  4. For efficient training on the ImageNet-1k dataset, Intel MKL and NVIDIA NCCL are prerequisites. Please check the official PyTorch GitHub repository for installation instructions.
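
For reference, the fb.resnet.torch-style CIFAR augmentation from note 2 typically amounts to a 4-pixel-padded random crop, a random horizontal flip, and per-channel standardization. A minimal torchvision sketch, assuming the commonly used CIFAR-10 statistics (train.py may differ in detail):

import torchvision.transforms as transforms

# 4-pixel padding + 32x32 random crop, horizontal flip, and
# per-channel standardization with the usual CIFAR-10 mean/std.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.4914, 0.4822, 0.4465],
                         std=[0.2470, 0.2435, 0.2616]),
])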

Tracking training progress with TensorBoard

All the experiments can be tracked with tensorboard_logger, which supports TensorBoard to monitor training progress efficiently.

tensorboard_logger can be installed with

pip install tensorboard_logger
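
A minimal usage sketch (the run directory and metric names are illustrative):

# configure() sets the directory that TensorBoard watches;
# log_value() records one scalar per training step.
from tensorboard_logger import configure, log_value

configure("runs/pyramidnet-110")

for step in range(1, 101):
    train_loss = 1.0 / step  # placeholder metric
    log_value("train_loss", train_loss, step)

Progress can then be viewed by pointing TensorBoard at the log directory, e.g. tensorboard --logdir runs.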

Paper Preview

Abstract

Deep convolutional neural networks (DCNNs) have shown remarkable performance in image classification tasks in recent years. Generally, deep neural network architectures are stacks consisting of a large number of convolution layers, and they perform downsampling along the spatial dimension via pooling to reduce memory usage. At the same time, the feature map dimension (i.e., the number of channels) is sharply increased at downsampling locations, which is essential to ensure effective performance because it increases the capability of high-level attributes. Moreover, this also applies to residual networks and is very closely related to their performance. In this research, instead of using downsampling to achieve a sharp increase at each residual unit, we gradually increase the feature map dimension at all the units to involve as many locations as possible. This is discussed in depth together with our new insights as it has proven to be an effective design to improve the generalization ability. Furthermore, we propose a novel residual unit capable of further improving the classification accuracy with our new network architecture. Experiments on benchmark CIFAR datasets have shown that our network architecture has a superior generalization ability compared to the original residual networks.

Schematic Illustration

We provide a simple schematic illustration comparing several network architectures: (a) basic residual units, (b) bottleneck residual units, (c) wide residual units, (d) our pyramidal residual units, and (e) our pyramidal bottleneck residual units, as follows:

[Figure: schematic comparison of residual unit architectures (a)-(e)]
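
To make (d) concrete, here is a hedged PyTorch sketch of a pyramidal basic residual unit following the paper's design (BN, 3x3 conv, BN, ReLU, 3x3 conv, BN, with no ReLU after the addition and a zero-padded identity shortcut absorbing the channel growth); the class and argument names are illustrative, not the repository's exact code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidalBasicUnit(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_ch)
        self.pad = out_ch - in_ch
        self.stride = stride

    def forward(self, x):
        # Paper's unit: BN -> conv -> BN -> ReLU -> conv -> BN
        # (the first ReLU is removed relative to pre-activation ResNets).
        out = self.conv1(self.bn1(x))
        out = self.bn3(self.conv2(F.relu(self.bn2(out))))
        shortcut = x
        if self.stride != 1:   # downsample the identity path spatially
            shortcut = F.avg_pool2d(shortcut, 2)
        if self.pad > 0:       # zero-pad the extra output channels
            shortcut = F.pad(shortcut, (0, 0, 0, 0, 0, self.pad))
        return out + shortcut  # no ReLU after the addition

x = torch.randn(2, 16, 32, 32)
print(PyramidalBasicUnit(16, 18)(x).shape)  # torch.Size([2, 18, 32, 32])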

Experimental Results

  1. The results are readily reproducible and show the same performance as the LuaTorch implementation for PyramidNets.

  2. Comparison with state-of-the-art networks in terms of top-1 test error rate vs. number of parameters:

[Figure: top-1 test error rate vs. number of parameters]

  3. Top-1 test error rates (%) on CIFAR datasets are shown in the following table. All PyramidNet results are produced with additive PyramidNets, and α denotes alpha (the widening factor). "Output Feat. Dim." denotes the feature dimension just before the final softmax classifier.

[Table: top-1 test error rates (%) on CIFAR datasets]

ImageNet-1k Pretrained Models

  • A pretrained PyramidNet-101-360 model, trained from scratch using the code in this repository (single-crop (224x224) validation error rates are reported):
Network Type                | Alpha | # of Params | Top-1 err. (%) | Top-5 err. (%) | Model File
ResNet-101 (Caffe model)    | -     | 44.7M       | 23.6           | 7.1            | Original Model
ResNet-101 (LuaTorch model) | -     | 44.7M       | 22.44          | 6.21           | Original Model
PyramidNet-v1-101           | 360   | 42.5M       | 21.98          | 6.20           | Download
  • Note that the widely used ResNet-101 (Caffe model) above was trained on images whose pixel intensities are in [0, 255] and are centered by subtracting the mean image, whereas our PyramidNet-101 was trained on images whose pixel values are standardized.
  • The model was originally trained with PyTorch 0.4, and the num_batches_tracked keys were excluded from the checkpoint for convenience (the BatchNorm2d layer in PyTorch >= 0.4 registers a num_batches_tracked buffer when track_running_stats is enabled); loading with strict=False handles this, as sketched below.
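
The mechanism can be sketched as follows; the stand-in module below is illustrative, with the repository's PyramidNet constructor substituted in practice:

import torch
import torch.nn as nn

# Stand-in model containing a BatchNorm2d layer; replace with the
# repository's PyramidNet when loading the released checkpoint.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1, bias=False),
                      nn.BatchNorm2d(16))

# Simulate a checkpoint saved without num_batches_tracked buffers.
state = {k: v for k, v in model.state_dict().items()
         if not k.endswith("num_batches_tracked")}

# strict=False skips the missing buffers instead of raising an error.
result = model.load_state_dict(state, strict=False)
print(result.missing_keys)  # ['1.num_batches_tracked']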

Updates

  1. Some minor bugs are fixed (2018/02/22).
  2. train.py is updated (including ImageNet-1k training code) (2018/04/06).
  3. resnet.py and PyramidNet.py are updated (2018/04/06).
  4. preresnet.py (Pre-ResNet architecture) is uploaded (2018/04/06).
  5. A pretrained model using PyTorch is uploaded (2018/07/09).

Citation

Please cite our paper if PyramidNets are used:

@inproceedings{DPRN,
  title={Deep Pyramidal Residual Networks},
  author={Han, Dongyoon and Kim, Jiwhan and Kim, Junmo},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017}
}

If this implementation is useful, please cite or acknowledge this repository in your work.

Contact

Dongyoon Han ([email protected]), Jiwhan Kim ([email protected]), Junmo Kim ([email protected])
