PyTorch implementation of our paper under review: Lottery Jackpots Exist in Pre-trained Models

Overview

Lottery Jackpots Exist in Pre-trained Models (Paper Link)
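As a rough illustration of the idea (not this repository's implementation): a lottery jackpot is a sparse subnetwork of a pre-trained model obtained by searching a binary weight mask while the pre-trained weights stay frozen. The sketch below shows one masked convolution under simplifying assumptions (magnitude-initialized mask scores, top-k thresholding, straight-through gradients); the actual mask search lives in cifar.py, imagenet.py, and the configs.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MaskedConv2d(nn.Conv2d):
        """Illustrative only: frozen pre-trained weights, trainable mask scores."""

        def __init__(self, *args, sparsity=0.9, **kwargs):
            super().__init__(*args, **kwargs)
            self.weight.requires_grad = False                      # weights are not fine-tuned
            self.scores = nn.Parameter(self.weight.abs().clone())  # assumed magnitude init
            self.sparsity = sparsity

        def binary_mask(self):
            # Keep the top (1 - sparsity) fraction of scores, zero out the rest.
            k = max(1, int((1 - self.sparsity) * self.scores.numel()))
            threshold = torch.topk(self.scores.flatten(), k).values.min()
            hard = (self.scores >= threshold).float()
            # Straight-through estimator: binary mask in the forward pass,
            # identity gradient to the scores in the backward pass.
            return hard + self.scores - self.scores.detach()

        def forward(self, x):
            return F.conv2d(x, self.weight * self.binary_mask(), self.bias,
                            self.stride, self.padding, self.dilation, self.groups)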

Requirements

  • Python >= 3.7.4
  • PyTorch >= 1.6.1
  • Torchvision >= 0.4.1
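To confirm that an installed environment meets these version requirements, a quick check is:

    import sys
    import torch
    import torchvision

    # Print the interpreter, PyTorch, and torchvision versions for comparison
    # against the minimum versions listed above.
    print("Python:", sys.version.split()[0])
    print("PyTorch:", torch.__version__)
    print("Torchvision:", torchvision.__version__)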

Reproduce the Experiment Results

  1. Download the pre-trained models from this link and place them in the pre-train folder.

  2. Select a configuration file in configs to reproduce the experiment results reported in the paper. For example, to find a lottery jackpot in 30 epochs for pruning 90% of the parameters of ResNet-32 on CIFAR-10, run:

    python cifar.py --config configs/resnet32_cifar10/90sparsity30epoch.yaml --gpus 0

    To find a lottery jackpot in 30 epochs for pruning 90% of the parameters of ResNet-50 on ImageNet, run:

    python imagenet.py --config configs/resnet50_imagenet/90sparsity30epoch.yaml --gpus 0

    Note that data_path in the yaml file should be changed to your local dataset directory before running (a quick check is sketched below).
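If it is unclear whether a config has been updated, a small check such as the following can confirm that data_path resolves to an existing directory. Only the data_path key is taken from the note above; the rest is a generic sketch and assumes PyYAML is installed.

    import os
    import yaml

    # Load one of the provided configs and verify that data_path points at a
    # real dataset directory before launching training.
    with open("configs/resnet32_cifar10/90sparsity30epoch.yaml") as f:
        cfg = yaml.safe_load(f)

    data_path = cfg.get("data_path")
    print("data_path:", data_path)
    assert data_path and os.path.isdir(data_path), \
        "edit data_path in the yaml file to point at your dataset"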

Evaluate Our Pruned Models

We provide the configurations, training logs, and pruned models reported in the paper. They can be downloaded from the links in the following table:

Model | Dataset | Sparsity | Epochs | Top-1 Acc. | Link
----- | ------- | -------- | ------ | ---------- | ----
VGGNet-19 | CIFAR-10 | 90% | 30 | 93.88% | link
VGGNet-19 | CIFAR-10 | 90% | 160 | 93.94% | link
VGGNet-19 | CIFAR-10 | 95% | 30 | 93.49% | link
VGGNet-19 | CIFAR-10 | 95% | 160 | 93.74% | link
VGGNet-19 | CIFAR-100 | 90% | 30 | 72.59% | link
VGGNet-19 | CIFAR-100 | 90% | 160 | 74.61% | link
VGGNet-19 | CIFAR-100 | 95% | 30 | 71.76% | link
VGGNet-19 | CIFAR-100 | 95% | 160 | 73.35% | link
ResNet-32 | CIFAR-10 | 90% | 30 | 93.70% | link
ResNet-32 | CIFAR-10 | 90% | 160 | 94.39% | link
ResNet-32 | CIFAR-10 | 95% | 30 | 92.90% | link
ResNet-32 | CIFAR-10 | 95% | 160 | 93.41% | link
ResNet-32 | CIFAR-100 | 90% | 30 | 72.22% | link
ResNet-32 | CIFAR-100 | 90% | 160 | 73.43% | link
ResNet-32 | CIFAR-100 | 95% | 30 | 69.38% | link
ResNet-32 | CIFAR-100 | 95% | 160 | 70.31% | link
ResNet-50 | ImageNet | 80% | 30 | 74.53% | link
ResNet-50 | ImageNet | 80% | 60 | 75.26% | link
ResNet-50 | ImageNet | 90% | 30 | 72.17% | link
ResNet-50 | ImageNet | 90% | 60 | 72.46% | link

To test our pruned models, download them and place them in the ckpt folder (an optional sanity check on the downloaded checkpoints is sketched after the evaluation commands below).

  1. Select a configuration file in configs to test the pruned models. For example, to evaluate a lottery jackpot for pruning ResNet-32 on CIFAR-10, run:

    python evaluate.py --config configs/resnet32_cifar10/evaluate.yaml --gpus 0

    To evaluate a lottery jackpot for pruning ResNet-50 on ImageNet, run:

    python evaluate.py --config configs/resnet50_imagenet/evaluate.yaml --gpus 0
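As an optional sanity check after downloading, a pruned checkpoint can be inspected for its overall sparsity. The file name and key layout below are assumptions, and the script assumes pruned weights are stored as exact zeros; adjust it to match the downloaded file.

    import torch

    # Load a downloaded checkpoint on CPU and report the fraction of zeroed
    # weights across all floating-point tensors.
    ckpt = torch.load("ckpt/resnet32_cifar10_90sparsity30epoch.pt", map_location="cpu")
    state = ckpt["state_dict"] if "state_dict" in ckpt else ckpt  # assumed layout

    total = sum(t.numel() for t in state.values() if torch.is_floating_point(t))
    zeros = sum((t == 0).sum().item() for t in state.values() if torch.is_floating_point(t))
    print(f"overall sparsity: {zeros / total:.2%}")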

Owner
Yuxin Zhang (Deep Neural Network Compression & Acceleration)