Oriented Response Networks, in CVPR 2017

Overview

Oriented Response Networks

[Home] [Project] [Paper] [Supp] [Poster]

Torch Implementation

The torch branch contains:

  • the official Torch implementation of ORN.
  • the MNIST-Variants demo.

Please follow the instructions below to install ORN and run the demo.

Prerequisites

  • Linux (tested on Ubuntu 14.04 LTS)
  • NVIDIA GPU + CUDA + cuDNN (CPU mode and CUDA-without-cuDNN mode are also available, but significantly slower)
  • Torch7

Getting started

You can set up everything via the single command wget -O - https://git.io/vHCMI | bash, or do it manually in case something goes wrong:

  1. install the dependencies (required by the demo code):

  2. clone the torch branch:

    # git version must be greater than 1.9.10
    git clone https://github.com/ZhouYanzhao/ORN.git -b torch --single-branch ORN.torch
    cd ORN.torch
    export DIR=$(pwd)
  3. install ORN:

    cd $DIR/install
    # install the CPU/GPU/CuDNN version ORN.
    bash install.sh
  4. unzip the MNIST dataset:

    cd $DIR/demo/datasets
    unzip MNIST
  5. run the MNIST-Variants demo:

    cd $DIR/demo
    # you can modify the script to test different hyper-parameters
    bash ./scripts/Train_MNIST.sh

Troubleshooting

If you run into the error 'cudnn.find' not found, update Torch7 to the latest version via cd <TORCH_DIR> && bash ./update.sh, then re-install everything.

More experiments

CIFAR 10/100

You can train the OR-WideResNet model (converted from WideResNet by simply replacing its Conv layers with ORConv layers) on the CIFAR datasets with WRN.

dataset=cifar10_original.t7 model=or-wrn widen_factor=4 depth=40 ./scripts/train_cifar.sh
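
The conversion itself is mechanical: every ordinary convolution is swapped for an oriented one. For intuition, here is a minimal PyTorch-style sketch of that swap (the Torch branch is Lua, so this is purely illustrative); make_orconv stands in for however the ORN code actually constructs an ORConv layer, and is an assumption rather than the repository's API:

import torch.nn as nn

def replace_convs(module, make_orconv):
    """Recursively swap every nn.Conv2d inside `module` for the
    oriented convolution returned by make_orconv(conv)."""
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d):
            setattr(module, name, make_orconv(child))
        else:
            replace_convs(child, make_orconv)
    return module

# Hypothetical usage: or_wrn = replace_convs(wrn, lambda c: ORConv2d(...))

The "(1/2)" in the model names below presumably denotes halved layer widths relative to the WRN baselines: since each ARF already encodes several orientations, fewer filters are needed, which is where the parameter savings come from.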

With exactly the same settings, the ORN-augmented WideResNet achieves state-of-the-art results while using significantly fewer parameters.

Network                  Params  CIFAR-10 (ZCA)  CIFAR-10 (mean/std)  CIFAR-100 (ZCA)  CIFAR-100 (mean/std)
DenseNet-100-12-dropout  7.0M    -               4.10                 -                20.20
DenseNet-190-40-dropout  25.6M   -               3.46                 -                17.18
WRN-40-4                 8.9M    4.97            4.53                 22.89            21.18
WRN-28-10-dropout        36.5M   4.17            3.89                 20.50            18.85
WRN-40-10-dropout        55.8M   -               3.80                 -                18.3
ORN-40-4(1/2)            4.5M    4.13            3.43                 21.24            18.82
ORN-28-10(1/2)-dropout   18.2M   3.52            2.98                 19.22            16.15

Table 1. Test error (%) on CIFAR-10/100 with flip/translation augmentation.

ImageNet

ILSVRC2012

The effectiveness of ORN is further verified on large-scale data. The OR-ResNet-18 model, upgraded from ResNet-18, yields significantly better performance with a comparable number of parameters.

Network       Params  Top-1 Error  Top-5 Error
ResNet-18     11.7M   30.614       10.98
OR-ResNet-18  11.4M   28.916       9.88

Table 2. Validation error (%) on the ILSVRC-2012 dataset.

You can use facebook.resnet.torch to train the OR-ResNet-18 model from scratch, or fine-tune it on your own data using the pre-trained weights.

-- To fill the model with the pre-trained weights:
model = require('or-resnet.lua')({tensorType='torch.CudaTensor', pretrained='or-resnet18_weights.t7'})

A demo notebook that uses the pre-trained OR-ResNet to classify images can be found here.

PyTorch Implementation

The pytorch branch contains:

  • the official PyTorch implementation of ORN (the alpha version supports only 1x1/3x3 ARFs with 4/8 orientation channels).
  • the MNIST-Variants demo.

Please follow the instructions below to install ORN and run the demo.

Prerequisites

  • Linux (tested on Ubuntu 14.04 LTS)
  • NVIDIA GPU + CUDA + cuDNN (CPU mode and CUDA-without-cuDNN mode are also available, but significantly slower)
  • PyTorch

Getting started

  1. install the dependencies (required by the demo code):

    • tqdm: pip install tqdm
    • pillow: pip install Pillow
  2. clone the pytorch branch:

    # git version must be greater than 1.9.10
    git clone https://github.com/ZhouYanzhao/ORN.git -b pytorch --single-branch ORN.pytorch
    cd ORN.pytorch
    export DIR=$(pwd)
  3. install ORN:

    cd $DIR/install
    bash install.sh
  4. run the MNIST-Variants demo:

    cd $DIR/demo
    # train ORN on MNIST-rot
    python main.py --use-arf
    # train baseline CNN
    python main.py
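
For intuition about what the --use-arf flag enables, here is a self-contained toy sketch of the ARF idea: a single learnable filter is materialized at several orientations, so each ARF yields one response map per orientation channel. The toy below uses crude 90-degree rotations via torch.rot90, whereas ORN proper rotates its filters at finer angles, so treat it as a conceptual illustration rather than the actual kernel:

import torch
import torch.nn.functional as F

def arf_responses(x, weight, n_orientations=4):
    # Materialize each filter at n_orientations rotations
    # (90-degree steps here; ORN uses finer in-plane rotations).
    rotated = [torch.rot90(weight, k, dims=(-2, -1))
               for k in range(n_orientations)]
    bank = torch.cat(rotated, dim=0)  # (n_arfs * n_orientations, in, 3, 3)
    return F.conv2d(x, bank, padding=1)

x = torch.randn(1, 1, 32, 32)
w = torch.randn(10, 1, 3, 3)  # 10 ARFs
y = arf_responses(x, w)
print(y.shape)  # torch.Size([1, 40, 32, 32]): 10 ARFs x 4 orientations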

Caffe Implementation

The caffe branch contains:

  • the official Caffe implementation of ORN (the alpha version supports only 1x1/3x3 ARFs with 4/8 orientation channels).
  • the MNIST-Variants demo.

Please follow the instructions below to install ORN and run the demo.

Prerequisites

  • Linux (tested on Ubuntu 14.04 LTS)
  • NVIDIA GPU + CUDA + cuDNN (CPU mode and CUDA-without-cuDNN mode are also available, but significantly slower)
  • Caffe

Getting started

  1. install the dependency (required by the demo code):

  2. clone the caffe branch:

    # git version must be greater than 1.9.10
    git clone https://github.com/ZhouYanzhao/ORN.git -b caffe --single-branch ORN.caffe
    cd ORN.caffe
    export DIR=$(pwd)
  3. install ORN:

    # modify Makefile.config first
    # compile ORN.caffe
    make clean && make -j"$(nproc)" all
  4. run the MNIST-Variants demo:

    cd $DIR/examples/mnist
    bash get_mnist.sh
    # train ORN & CNN on MNIST-rot
    bash train.sh

Note

Due to implementation differences:

  • to upgrade a Conv layer to an ORConv layer, add an orn_param to its definition;
  • the num_output of an ORConv layer should be multiplied by the nOrientation of its ARFs.

Example:

layer {
  type: "Convolution"
  name: "ORConv" bottom: "Data" top: "ORConv"
  # add this line to replace regular filters with ARFs
  orn_param {orientations: 8}
  param { lr_mult: 1 decay_mult: 2}
  convolution_param {
    # this means 10 ARF feature maps
    num_output: 80
    kernel_size: 3
    stride: 1
    pad: 0
    weight_filler { type: "msra"}
    bias_filler { type: "constant" value: 0}
  }
}
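
As a quick sanity check on that bookkeeping (plain Python, nothing Caffe-specific):

n_orientations = 8       # orn_param { orientations: 8 }
arf_feature_maps = 10    # the number of ARFs actually wanted
num_output = arf_feature_maps * n_orientations
assert num_output == 80  # matches num_output: 80 in the layer above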

Check the MNIST demo prototxt (and its visualization) for more details.

Citation

If you use the code in your research, please cite:

@INPROCEEDINGS{Zhou2017ORN,
    author = {Zhou, Yanzhao and Ye, Qixiang and Qiu, Qiang and Jiao, Jianbin},
    title = {Oriented Response Networks},
    booktitle = {CVPR},
    year = {2017}
}