Official PyTorch Implementation of Learning Architectures for Binary Networks

Overview

Learning Architectures for Binary Networks

A PyTorch implementation of the paper Learning Architectures for Binary Networks (BNAS), ECCV 2020.
If you find any part of our code useful for your research, please consider citing our paper.

@inproceedings{kimSC2020BNAS,
  title={Learning Architectures for Binary Networks},
  author={Dahyun Kim and Kunal Pratap Singh and Jonghyun Choi},
  booktitle={ECCV},
  year={2020}
}

Introduction

We present a method for searching architectures of networks with both binary weights and binary activations. Under the same binarization scheme, our searched architectures outperform binary networks whose architectures are well-known floating point networks.

Note: our searched architectures achieve results competitive with the state of the art without additional pretraining, new binarization schemes, or novel training methods.

Prerequisite - Docker Containers

We recommend using the Docker container below, as it provides a comprehensive running environment. You will need to install nvidia-docker and its related packages following the instructions here.

Pull our image uploaded here using the following command.

$ docker pull killawhale/apex:latest

You can then create the container to run our code via the following command.

$ docker run --name [CONTAINER_NAME] --runtime=nvidia -it -v [HOST_FILE_DIR]:[CONTAINER_FILE_DIR] --shm-size 16g killawhale/apex:latest bash
  • [CONTAINER_NAME]: the name of the created container
  • [HOST_FILE_DIR]: the path of the directory on the host machine which you want to sync your container with
  • [CONTAINER_FILE_DIR]: the name in which the synced directory will appear inside the container
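
For example, with a hypothetical checkout at /home/user/bnas synced to /workspace inside the container:

$ docker run --name bnas --runtime=nvidia -it -v /home/user/bnas:/workspace --shm-size 16g killawhale/apex:latest bash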

Note: for those who do not want to use the Docker container, we use PyTorch 1.2, torchvision 0.5, Python 3.6, CUDA 10.0, and Apex 0.1. You can also refer to the provided requirements.txt.
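
If you skip Docker, the Python dependencies can presumably be installed from the provided requirements file (note that Apex is typically built from NVIDIA's GitHub repository rather than installed from PyPI):

$ pip install -r requirements.txt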

Dataset Preparation

CIFAR10

For CIFAR10, we use the dataset provided by torchvision. Run the following command to download it.

$ python src/download_cifar10.py

This will create a directory named data and download the dataset in it.
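
For reference, here is a minimal sketch of what the download amounts to, assuming the script simply delegates to torchvision (the actual src/download_cifar10.py may differ):

```python
# Minimal sketch (assumption): fetch CIFAR10 into ./data via torchvision.
from torchvision import datasets

# download=True fetches and extracts the dataset if it is not already present.
datasets.CIFAR10(root='./data', train=True, download=True)
datasets.CIFAR10(root='./data', train=False, download=True)
```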

ImageNet

For ImageNet, please follow the instructions below.

  1. Download the training set of the ImageNet dataset.
$ wget http://www.image-net.org/challenges/LSVRC/2012/nnoupb/ILSVRC2012_img_train.tar

This may take a long time depending on your internet connection.

  2. Download the validation set of the ImageNet dataset.
$ wget http://www.image-net.org/challenges/LSVRC/2012/nnoupb/ILSVRC2012_img_val.tar
  3. Make a directory which will contain the ImageNet dataset and move your downloaded .tar files inside it.

  4. Extract and organize the training set into per-class directories.

$ mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
$ tar -xvf ILSVRC2012_img_train.tar && rm -f ILSVRC2012_img_train.tar
$ find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; tar -xvf "${NAME}" -C "${NAME%.tar}"; rm -f "${NAME}"; done
$ cd ..
  5. Do the same for the validation set.
$ mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xvf ILSVRC2012_img_val.tar
$ wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash
$ rm -f ILSVRC2012_img_val.tar
  6. Change the synset IDs to integer labels.
$ python src/prepare_imgnet.py [PATH_TO_IMAGENET]
  • [PATH_TO_IMAGENET]: the path to the directory which has the ImageNet dataset

You can optionally run the following if you only prepared the validation set.

$ python src/prepare_imgnet.py [PATH_TO_IMAGENET] --val_only
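
As a quick sanity check that the resulting layout is what the code expects (one subdirectory per class), you can point torchvision's ImageFolder at the prepared splits; this snippet is illustrative and not part of the repository:

```python
# Illustrative sanity check: the prepared splits should load as ImageFolder datasets.
from torchvision import datasets

train_set = datasets.ImageFolder('[PATH_TO_IMAGENET]/train')
val_set = datasets.ImageFolder('[PATH_TO_IMAGENET]/val')
print(len(train_set.classes), 'train classes,', len(train_set), 'train images')
print(len(val_set.classes), 'val classes,', len(val_set), 'val images')
```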

Inference with Pre-Trained Models

To reproduce the results reported in the paper, you can use the pretrained models provided here.

Note: For CIFAR10 we only share BNAS-A, as training other configurations for CIFAR10 does not take much time. For ImageNet, we currently provide all the models (BNAS-D,E,F,G,H).

For running validation on CIFAR10 using our pretrained models, use the following command.

$ CUDA_VISIBLE_DEVICES=0,1 python -W ignore src/test.py --path_to_weights [PATH_TO_WEIGHTS] --arch latest_cell_zeroise --parallel
  • [PATH_TO_WEIGHTS]: the path to the downloaded pretrained weights (for CIFAR10, it's BNAS-A)

For running validation on ImageNet using our pretrained models, use the following command.

$ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 src/test_imagenet.py --data [PATH_TO_IMAGENET] --path_to_weights [PATH_TO_WEIGHTS] --model_config [MODEL_CONFIG]
  • [PATH_TO_IMAGENET]: the path to the directory which has the ImageNet dataset
  • [PATH_TO_WEIGHTS]: the path to the downloaded pretrained weights
  • [MODEL_CONFIG]: the model to run the validation with. Can be one of 'bnas-d', 'bnas-e', 'bnas-f', 'bnas-g', or 'bnas-h'
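
For example, to validate BNAS-D with hypothetical local paths:

$ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 src/test_imagenet.py --data ./imagenet --path_to_weights ./weights/bnas-d.pth --model_config bnas-d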

Expected results:

| Model  | Reported Top-1 (%) | Reported Top-5 (%) | Reproduced Top-1 (%) | Reproduced Top-5 (%) |
|--------|--------------------|--------------------|----------------------|----------------------|
| BNAS-A | 92.70              | -                  | ~92.39               | -                    |
| BNAS-D | 57.69              | 79.89              | ~57.60               | ~80.00               |
| BNAS-E | 58.76              | 80.61              | ~58.15               | ~80.16               |
| BNAS-F | 58.99              | 80.85              | ~58.89               | ~80.87               |
| BNAS-G | 59.81              | 81.61              | ~59.39               | ~81.03               |
| BNAS-H | 63.51              | 83.91              | ~63.70               | ~84.49               |

You can click the links on the model names to download the corresponding weights.

Note: the provided pretrained weights were trained with Apex and with a different batch size than the one reported in the paper (due to computational resource constraints), so the inference results may differ slightly from those reported in the paper.
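
If you want to peek inside a downloaded checkpoint before running validation, a generic PyTorch snippet such as the one below will work; the internal key layout is an assumption and may vary between files:

```python
# Illustrative: inspect a downloaded checkpoint (the key layout is an assumption).
import torch

ckpt = torch.load('[PATH_TO_WEIGHTS]', map_location='cpu')
# Checkpoints are typically either a raw state_dict or a dict wrapping one.
state_dict = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```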

More comparisons with state-of-the-art binary networks can be found in the paper.

Searching Architectures

To search architectures, use the following command.

$ CUDA_VISIBLE_DEVICES=0 python -W ignore src/search.py --save [ARCH_NAME]
  • [ARCH_NAME]: the name of the searched architecture

This will automatically append the searched architecture to the genotypes.py file. Note that two genotypes will be appended: one for CIFAR10 and one for ImageNet. The validation accuracy at the end of the search is not indicative of the final performance of the searched architecture; to obtain the final performance, train the searched architecture from scratch as described next.
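
For custom architectures, entries in genotypes.py presumably follow the DARTS-style Genotype format that this codebase builds on. The sketch below shows the shape of such an entry; the operation names are placeholders borrowed from DARTS, not an actual searched BNAS cell:

```python
# Illustrative only: a DARTS-style genotype entry. The operations and
# connectivity below are placeholders, not a searched BNAS cell.
from collections import namedtuple

Genotype = namedtuple('Genotype', 'normal normal_concat reduce reduce_concat')

my_custom_cell = Genotype(
    normal=[('dil_conv_3x3', 1), ('skip_connect', 0),
            ('sep_conv_3x3', 0), ('dil_conv_5x5', 1),
            ('sep_conv_5x5', 2), ('skip_connect', 1),
            ('dil_conv_3x3', 3), ('sep_conv_3x3', 0)],
    normal_concat=range(2, 6),
    reduce=[('max_pool_3x3', 0), ('sep_conv_5x5', 1),
            ('dil_conv_3x3', 2), ('skip_connect', 0),
            ('sep_conv_3x3', 3), ('max_pool_3x3', 1),
            ('dil_conv_5x5', 4), ('skip_connect', 2)],
    reduce_concat=range(2, 6),
)
```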

Figure: one example of a normal (left) and reduction (right) cell searched by BNAS.

Training Searched Architectures from Scratch

To train our best searched cell on CIFAR10, use the following command.

$ CUDA_VISIBLE_DEVICES=0,1 python -W ignore src/train.py --learning_rate 0.05 --save [SAVE_NAME] --arch latest_cell_zeroise --parallel 
  • [SAVE_NAME]: experiment name

You will be able to see the validation accuracy at every epoch, as shown below, so there is no need to run separate inference code.

2019-12-29 11:47:42,166 args = Namespace(arch='latest_cell_zeroise', batch_size=256, data='../data', drop_path_prob=0.2, epochs=600, init_channels=36, layers=20, learning_rate=0.05, model_path='saved_models', momentum=0.9, num_skip=1, parallel=True, report_freq=50, save='eval-latest_cell_zeroise_train_repro_0.05', seed=0, weight_decay=3e-06)
2019-12-29 11:47:46,893 param size = 5.578252MB
2019-12-29 11:47:48,654 epoch 0 lr 2.500000e-03
2019-12-29 11:47:55,462 train 000 2.623852e+00 9.375000 57.812500
2019-12-29 11:48:34,952 train 050 2.103856e+00 22.533701 74.180453
2019-12-29 11:49:14,118 train 100 1.943232e+00 27.440439 80.186417
2019-12-29 11:49:53,748 train 150 1.867823e+00 30.114342 82.512417
2019-12-29 11:50:29,680 train_acc 32.170000
2019-12-29 11:50:30,057 valid 000 1.792161e+00 30.859375 88.671875
2019-12-29 11:50:34,032 valid_acc 38.350000
2019-12-29 11:50:34,101 epoch 1 lr 2.675926e-03
2019-12-29 11:50:35,476 train 000 1.551331e+00 40.234375 92.187500
2019-12-29 11:51:15,773 train 050 1.572010e+00 42.256434 90.502451
2019-12-29 11:51:55,991 train 100 1.539024e+00 43.181467 90.976949
2019-12-29 11:52:36,345 train 150 1.515295e+00 44.264797 91.395902
2019-12-29 11:53:12,128 train_acc 45.016000
2019-12-29 11:53:12,487 valid 000 1.419507e+00 46.484375 93.359375
2019-12-29 11:53:16,366 valid_acc 48.640000

You should expect around 92% validation accuracy with our best searched cell once training finishes at 600 epochs. To train custom architectures, pass them to the --arch flag after adding them to the genotypes.py file. Note that you can also change the number of stacked cells and the number of initial channels via the --layers and --init_channels options, respectively.

With different architectures, the final network will have a different computational cost, and the default hyperparameters (such as the learning rate schedule) may not be optimal. Thus, you should expect the final CIFAR10 validation accuracy to vary by around 1 to 1.5%.

To train our best searched cell on ImageNet, use the following command.

$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 src/train_imagenet.py --data [PATH_TO_IMAGENET] --arch latest_cell --model_config [MODEL_CONFIG] --save [SAVE_NAME]
  • [PATH_TO_IMAGENET]: the path to the directory which has the ImageNet dataset
  • [MODEL_CONFIG]: the model to train. Can be one of 'bnas-d', 'bnas-e', 'bnas-f', 'bnas-g', or 'bnas-h'
  • [SAVE_NAME]: experiment name

You will be able to see the validation accuracy at every epoch, as shown below, so there is no need to run separate inference code.

2020-03-25 09:53:44,578 args = Namespace(arch='latest_cell', batch_size=256, class_size=1000, data='../../darts-norm/cnn/Imagenet/', distributed=True, drop_path_prob=0, epochs=250, gpu=0, grad_clip=5.0, init_channels=68, keep_batchnorm_fp32=None, label_smooth=0.1, layers=15, learning_rate=0.05, local_rank=0, loss_scale=None, momentum=0.9, num_skip=3, opt_level='O0', report_freq=100, resume=None, save='eval-bnas_f_retrain', seed=0, start_epoch=0, weight_decay=3e-05, world_size=4)
2020-03-25 09:53:50,926 no of parameters 39781442.000000
2020-03-25 09:53:56,444 epoch 0
2020-03-25 09:54:06,220 train 000 7.844889e+00 0.000000 0.097656
2020-03-25 10:00:01,717 train 100 7.059666e+00 0.315207 1.382658
2020-03-25 10:06:09,138 train 200 6.909059e+00 0.498484 2.070215
2020-03-25 10:12:21,804 train 300 6.750810e+00 0.838027 3.157119
2020-03-25 10:18:30,815 train 400 6.627304e+00 1.203534 4.369691
2020-03-25 10:24:37,901 train 500 6.526508e+00 1.601679 5.519625
2020-03-25 10:30:44,522 train 600 6.439230e+00 2.016983 6.666298
2020-03-25 10:36:50,776 train 700 6.360960e+00 2.424132 7.778648
2020-03-25 10:42:58,087 train 800 6.291507e+00 2.830446 8.824784
2020-03-25 10:49:04,204 train 900 6.228209e+00 3.251162 9.829681
2020-03-25 10:55:12,315 train 1000 6.167705e+00 3.673670 10.844819
2020-03-25 11:01:18,095 train 1100 6.112888e+00 4.080009 11.778710
2020-03-25 11:07:23,347 train 1200 6.060676e+00 4.500969 12.712388
2020-03-25 11:10:30,048 train_acc 4.697276
2020-03-25 11:10:33,504 valid 000 4.754593e+00 10.839844 27.832031
2020-03-25 11:11:03,920 valid_acc_top1 11.714000
2020-03-25 11:11:03,920 valid_acc_top5 28.784000

You should expect accuracy similar to our pretrained models once training finishes at 250 epochs.

To train custom architectures, pass them to the --arch flag after adding them to the genotypes.py file. Note that you can also change the number of stacked cells and the number of initial channels via the --layers and --init_channels options, respectively.

With different architectures, the final network will have a different computational cost, and the default hyperparameters (such as the learning rate schedule) may not be optimal. Thus, you should expect the final ImageNet validation accuracy to vary by around 0.2%.

Note: we ran our experiments with at least two NVIDIA V100s. To run on a single GPU, omit the --parallel flag and specify the GPU id using the CUDA_VISIBLE_DEVICES environment variable, as shown below.
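
For example, a single-GPU CIFAR10 training run uses the same flags as above, minus --parallel:

$ CUDA_VISIBLE_DEVICES=0 python -W ignore src/train.py --learning_rate 0.05 --save [SAVE_NAME] --arch latest_cell_zeroise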


Comments
  • TypeError: forward() missing 1 required positional argument: 'x'

    Hi! Thanks for your excellent contribution to the community! I faced a problem when I tried to simply run your code (search or inference). It raised "TypeError: forward() missing 1 required positional argument: 'x'". The problem is probably in model_search.py, at lines 136 and 124. Can you take a look at it?

    opened by lawsonX 1
  • Something wrong with selecting which convs get binarized

    In bin_utils_search.py, lines 24-30, there is an issue with selecting which convolution kernels get binarized. The right way is to choose the 8 * 14 * 4 = 448 convolutions to binarize (8 is the number of cells, 14 is the number of edges in a cell, and 4 is the number of convolution options per edge). However, the code chooses the top 448 convolutions overall (including res, preprocess0, and preprocess1), because n here does nothing to distinguish them. I hope for your reply!

    opened by DamonAtSjtu 1
  • Calculating FLOPs for a Binary/XNOR-Net

    Hello, thanks for providing amazing support to the research community. Your code is very helpful. I was reading your paper and saw that you report FLOPs versus accuracy. However, when looking at your code, I could not find where the FLOPs are calculated. Would you be happy to share how you computed the FLOPs count for the binary/XNOR-net? Any help regarding this would be much appreciated.

    Kind Regards, Mohaimen

    opened by mohaimenz 0
Releases (v1.0)
  • v1.0 (Aug 12, 2020)

    First release of the official PyTorch implementation of our paper Learning Architectures for Binary Networks (https://arxiv.org/abs/2002.06963)

    Source code (tar.gz)
    Source code (zip)
Owner
Computer Vision Lab. @ GIST
Some useful codes for computer vision and machine learning.