
SeerNet

This is the PyTorch implementation for the paper *Learning Accurate Performance Predictors for Ultrafast Automated Model Compression*, which is in submission to TPAMI. This repo contains active sampling for training the performance predictor, compression policy optimization, and finetuning on two datasets (VGG-small and ResNet20 on CIFAR-10; ResNet18, MobileNetV2, and ResNet50 on ImageNet) using our proposed SeerNet.

The pipeline works as follows. We first draw a few random samples to pretrain the MLP predictor. With the pretrained predictor, we perform active sampling via evolution search to collect additional samples, which are used to further optimize the predictor. We then search for the optimal compression policy under the given constraint using the predictor. Finally, we finetune the resulting policy until convergence.
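To make the flow concrete, here is a toy, self-contained sketch of the active-sampling loop. Everything in it (the accuracy "measurement", the stand-in predictor, the mutation scheme, all names and sizes) is a simplified placeholder for illustration, not the repo's implementation:

import numpy as np

rng = np.random.default_rng(0)
DIM = 57  # length of an encoded compression policy (e.g. ResNet18)

def measure_accuracy(policy):
    # Stand-in for actually quantizing/pruning and evaluating a model.
    return float(-np.sum((policy - 0.5) ** 2))

def fit_predictor(policies, accs):
    # Stand-in for training the MLP predictor (here: linear least squares).
    X = np.hstack([policies, np.ones((len(policies), 1))])
    w = np.linalg.lstsq(X, np.asarray(accs), rcond=None)[0]
    return lambda p: float(np.append(p, 1.0) @ w)

# 1) Pretrain the predictor on a few random samples.
policies = rng.random((32, DIM))
accs = [measure_accuracy(p) for p in policies]
predictor = fit_predictor(policies, accs)

# 2)-3) Active sampling: evolution search guided by the predictor proposes
# promising policies, which are evaluated and used to refine the predictor.
for _ in range(4):
    elites = policies[np.argsort(accs)[-8:]]
    candidates = np.clip(elites + 0.05 * rng.standard_normal((8, DIM)), 0.0, 1.0)
    best = max(candidates, key=predictor)
    policies = np.vstack([policies, best])
    accs.append(measure_accuracy(best))
    predictor = fit_predictor(policies, accs)

# 4) Search the final policy with the refined predictor, then finetune it
# (constraint handling and finetuning are covered by the commands below).
final_policy = max(rng.random((1000, DIM)), key=predictor)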

Quick Start

Prerequisites

  • python>=3.5
  • pytorch>=1.1.0
  • torchvision>=0.3.0
  • other packages such as numpy and scikit-learn

Dataset

If you already have the ImageNet dataset prepared for PyTorch, you can create a link to it under the data folder and use it:

# prepare dataset, change the path to your own
ln -s /path/to/imagenet/ data/

If you don't have ImageNet yet, you can download it and then use the following script to organize the validation images into class subfolders: https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh

Active Sampling

You can run the following command to actively search for samples with the evolutionary algorithm:

CUDA_VISIBLE_DEVICES=0 python PGD/search.py --sample_path=results/res18/resnet18_sample.npy --acc_path=results/res18/resnet18_acc.npy --lr=0.2 --batch=400 --epoch=1000 --save_path=search_result.npy --dim=57

Training the Performance Predictor

You can run the following command to train the MLP predictor:

CUDA_VISIBLE_DEVICES=0 python PGD/regression/regression.py --sample_path=../results/res18/resnet18_sample.npy --acc_path=../results/res18/resnet18_acc.npy --lr=0.2 --batch=400 --epoch=5000 --dim=57
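For intuition, the predictor is essentially a small MLP regressor that maps an encoded compression policy (a vector of length --dim, e.g. 57 for ResNet18) to a predicted accuracy. Below is a minimal, self-contained sketch of such a regressor; the layer sizes, optimizer settings, and data handling are assumptions for illustration, and the actual implementation lives in PGD/regression/regression.py:

import numpy as np
import torch
import torch.nn as nn

# The real script loads policy encodings and measured accuracies from the
# .npy files passed via --sample_path/--acc_path; random data is used here
# so the sketch runs standalone.
samples = torch.tensor(np.random.rand(400, 57), dtype=torch.float32)
accs = torch.tensor(np.random.rand(400, 1), dtype=torch.float32)

# A small MLP regressor from policy vector to accuracy (sizes are illustrative).
predictor = nn.Sequential(
    nn.Linear(57, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

optimizer = torch.optim.SGD(predictor.parameters(), lr=0.2)
loss_fn = nn.MSELoss()

for epoch in range(5000):
    optimizer.zero_grad()
    loss = loss_fn(predictor(samples), accs)
    loss.backward()
    optimizer.step()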

Compression Policy Optimization

After training the performance predictor, you can run one of the following commands to optimize the compression policy:


# for resnet18, please use
python PGD/pgd_search.py --arch qresnet18 --layer_nums 19 --step_size 0.005 --max_bops 30 --pretrained_weight path/to/weight

# for mobilenetv2, please use
python PGD/pgd_search.py --arch qmobilenetv2 --layer_nums 53 --step_size 0.005 --max_bops 8 --pretrained_weight path/to/weight

# for resnet50, please use
python PGD/pgd_search.py --arch qresnet50 --layer_nums 52 --step_size 0.005 --max_bops 65 --pretrained_weight path/to/weight
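Conceptually, PGD/pgd_search.py optimizes a relaxed (continuous) compression policy by gradient steps through the differentiable predictor, while keeping the policy inside its valid range and under the BOPs budget given by --max_bops. The sketch below is a simplified illustration under those assumptions; the predictor here is randomly initialized, and the cost model and projection are made up for demonstration:

import torch
import torch.nn as nn

DIM = 57          # encoded policy length (e.g. qresnet18 above)
MAX_BOPS = 30.0   # budget, in the same (arbitrary) units as the toy cost model

# Stand-in for the trained accuracy predictor (randomly initialized here).
predictor = nn.Sequential(nn.Linear(DIM, 256), nn.ReLU(), nn.Linear(256, 1))

def bops(policy):
    # Toy differentiable cost model; the real one depends on the architecture.
    return 60.0 * policy.mean()

policy = torch.full((DIM,), 0.5, requires_grad=True)
step_size = 0.005

for _ in range(200):
    acc = predictor(policy).sum()
    grad, = torch.autograd.grad(acc, policy)
    with torch.no_grad():
        policy += step_size * grad        # gradient ascent on predicted accuracy
        policy.clamp_(0.0, 1.0)           # project back onto the valid policy range
        while bops(policy) > MAX_BOPS:    # crude projection onto the BOPs budget
            policy.mul_(0.99)

print(policy.detach())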

Finetune Policy

After optimization, you obtain the optimal quantization and pruning strategy list. You can replace the strategy list in finetune_imagenet.py with it to finetune and evaluate the performance on the ImageNet dataset (a hypothetical example of such a list is sketched below). You can also use the default strategy to reproduce the results in our paper.
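For reference, the strategy list is a per-layer specification of quantization bitwidths and pruning ratios. The exact format expected by finetune_imagenet.py should be taken from that file; the snippet below is a purely hypothetical illustration of what such a list can look like:

# Hypothetical format: one (weight_bits, activation_bits, pruning_ratio) entry
# per compressed layer; check finetune_imagenet.py for the actual structure.
strategy = [
    (4, 4, 0.3),   # layer 1
    (6, 4, 0.5),   # layer 2
    (8, 8, 0.0),   # layer 3
    # ... one entry per remaining layer
]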

For finetuning ResNet18 on ImageNet, please run:

bash run/finetune_resnet18.sh

For finetuning MobileNetv2 on ImageNet, please run:

bash run/finetune_mobilenetv2.sh

For finetuning ResNet50 on ImageNet, please run:

bash run/finetune_resnet50.sh
Owner
IVG Lab, Department of Automation, Tsinghua University