Open source implementation of AceNAS: Learning to Rank Ace Neural Architectures with Weak Supervision of Weight Sharing

Overview

This repo contains the experiment code of AceNAS and should not be considered an official release. We are working on integrating AceNAS as a built-in strategy in NNI.

Data Preparation

  1. Download our prepared data from Google Drive. The directory should look like this (a quick sanity check is sketched after this list):
data
├── checkpoints
│   ├── acenas-m1.pth.tar
│   ├── acenas-m2.pth.tar
│   └── acenas-m3.pth.tar
├── gcn
│   ├── nasbench101_gt_all.pkl
│   ├── nasbench201cifar10_gt_all.pkl
│   ├── nasbench201_gt_all.pkl
│   ├── nasbench201imagenet_gt_all.pkl
│   ├── nds_amoeba_gt_all.pkl
│   ├── nds_amoebaim_gt_all.pkl
│   ├── nds_dartsfixwd_gt_all.pkl
│   ├── nds_darts_gt_all.pkl
│   ├── nds_dartsim_gt_all.pkl
│   ├── nds_enasfixwd_gt_all.pkl
│   ├── nds_enas_gt_all.pkl
│   ├── nds_enasim_gt_all.pkl
│   ├── nds_nasnet_gt_all.pkl
│   ├── nds_nasnetim_gt_all.pkl
│   ├── nds_pnasfixwd_gt_all.pkl
│   ├── nds_pnas_gt_all.pkl
│   ├── nds_pnasim_gt_all.pkl
│   ├── nds_supernet_evaluate_all_test1_amoeba.json
│   ├── nds_supernet_evaluate_all_test1_dartsfixwd.json
│   ├── nds_supernet_evaluate_all_test1_darts.json
│   ├── nds_supernet_evaluate_all_test1_enasfixwd.json
│   ├── nds_supernet_evaluate_all_test1_enas.json
│   ├── nds_supernet_evaluate_all_test1_nasnet.json
│   ├── nds_supernet_evaluate_all_test1_pnasfixwd.json
│   ├── nds_supernet_evaluate_all_test1_pnas.json
│   ├── supernet_evaluate_all_test1_nasbench101.json
│   ├── supernet_evaluate_all_test1_nasbench201cifar10.json
│   ├── supernet_evaluate_all_test1_nasbench201imagenet.json
│   └── supernet_evaluate_all_test1_nasbench201.json
├── nb201
│   ├── split-cifar100.txt
│   ├── split-cifar10-valid.txt
│   └── split-imagenet-16-120.txt
├── proxyless
│   ├── imagenet
│   │   ├── augment_files.txt
│   │   ├── test_files.txt
│   │   ├── train_files.txt
│   │   └── val_files.txt
│   ├── proxyless-84ms-train.csv
│   ├── proxyless-ws-results.csv
│   └── tunas-proxylessnas-search.csv
└── tunas
    ├── imagenet_valid_split_filenames.txt
    ├── random_architectures.csv
    └── searched_architectures.csv
  2. (Required for benchmark experiments) Download the CIFAR-10, CIFAR-100, and ImageNet-16-120 datasets and also put them under data:
data
├── cifar10
│   └── cifar-10-batches-py
│       ├── batches.meta
│       ├── data_batch_1
│       ├── data_batch_2
│       ├── data_batch_3
│       ├── data_batch_4
│       ├── data_batch_5
│       ├── readme.html
│       └── test_batch
├── cifar100
│   └── cifar-100-python
│       ├── meta
│       ├── test
│       └── train
└── imagenet16
    ├── train_data_batch_1
    ├── train_data_batch_10
    ├── train_data_batch_2
    ├── train_data_batch_3
    ├── train_data_batch_4
    ├── train_data_batch_5
    ├── train_data_batch_6
    ├── train_data_batch_7
    ├── train_data_batch_8
    ├── train_data_batch_9
    └── val_data
  3. (Required for ImageNet experiments) Prepare ImageNet. You can put it anywhere.

  4. (Optional) Copy tunas (https://github.com/google-research/google-research/tree/master/tunas) to a folder named tunas.
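
Once everything is in place, a quick sanity check can catch layout mistakes early. This is an illustrative sketch, not a script shipped with the repo; the pickle contents are an assumption, so adjust the peek if the format differs:

# check that the expected directories exist
ls data/checkpoints data/gcn data/nb201 data/proxyless data/tunas
# peek into one ground-truth pickle (format assumed)
python -c "import pickle; d = pickle.load(open('data/gcn/nasbench201_gt_all.pkl', 'rb')); print(type(d))"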

Evaluate pre-trained models

We provide 3 checkpoints obtained from 3 different runs in data/checkpoints. Please evaluate them via the following commands:

python -m tools.standalone.imagenet_eval acenas-m1 /path/to/your/imagenet
python -m tools.standalone.imagenet_eval acenas-m2 /path/to/your/imagenet
python -m tools.standalone.imagenet_eval acenas-m3 /path/to/your/imagenet
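
To evaluate all three checkpoints in one go, a simple loop over the commands above works (the ImageNet path is a placeholder):

IMAGENET_DIR=/path/to/your/imagenet  # placeholder; point this at your ImageNet root
for CKPT in acenas-m1 acenas-m2 acenas-m3; do
    python -m tools.standalone.imagenet_eval "$CKPT" "$IMAGENET_DIR"
done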

Train supernet

python -m tools.supernet.nasbench101 experiments/supernet/nasbench101.yml
python -m tools.supernet.nasbench201 experiments/supernet/nasbench201.yml
python -m tools.supernet.nds experiments/supernet/darts.yml
python -m tools.supernet.proxylessnas experiments/supernet/proxylessnas.yml

Please refer to the experiments/supernet folder for more configurations.
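
Each YAML file there corresponds to one search space and training recipe; to see what is available before launching a run:

ls experiments/supernet/*.yml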

Benchmark experiments

We have already provided weight-sharing results from the supernets so that you do not have to train your own. They are the JSON files under data/gcn.

# pretrain
python -m gcn.benchmarks.pretrain data/gcn/supernet_evaluate_all_test1_${SEARCHSPACE}.json data/gcn/${SEARCHSPACE}_gt_all.pkl --metric_keys top1 flops params
# finetune
python -m gcn.benchmarks.train --use_train_samples --budget ${BUDGET} --test_dataset data/gcn/${SEARCHSPACE}_gt_all.pkl --iteration 5 \
    --loss lambdarank --gnn_type gcn --early_stop_patience 50 --learning_rate 0.005 --opt_type adam --wd 5e-4 --epochs 300 --bs 20 \
    --resume /path/to/previous/output.pt
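
As a concrete example, the pretrain stage can be swept over the NAS-Bench search spaces like this (illustrative; the NDS spaces prefix their JSON files with nds_, so they need slightly different paths, and the budget value below is an assumption):

BUDGET=50  # number of labeled architectures used for finetuning; value is an assumption
for SEARCHSPACE in nasbench101 nasbench201 nasbench201cifar10 nasbench201imagenet; do
    python -m gcn.benchmarks.pretrain data/gcn/supernet_evaluate_all_test1_${SEARCHSPACE}.json \
        data/gcn/${SEARCHSPACE}_gt_all.pkl --metric_keys top1 flops params
done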

Running baselines

BRP-NAS:

# pretrain
python -m gcn.benchmarks.pretrain data/gcn/supernet_evaluate_all_test1_${SEARCHSPACE}.json data/gcn/${SEARCHSPACE}_gt_all.pkl --metric_keys flops
# finetune
python -m gcn.benchmarks.train --use_train_samples --budget ${BUDGET} --test_dataset data/gcn/${SEARCHSPACE}_gt_all.pkl --iteration 5 \
    --loss brp --gnn_type brp --early_stop_patience 35 --learning_rate 0.00035 \
    --opt_type adamw --wd 5e-4 --epochs 250 --bs 64 --resume /path/to/previous/output.pt

Vanilla:

python -m gcn.benchmarks.train --use_train_samples --budget ${BUDGET} --test_dataset data/gcn/${SEARCHSPACE}_gt_all.pkl --iteration 1 \
    --loss mse --gnn_type vanilla --n_hidden 144 --learning_rate 2e-4 --opt_type adam --wd 1e-3 --epochs 300 --bs 10

ProxylessNAS search space

Train GCN

python -m gcn.proxyless.pretrain --metric_keys ws_accuracy simulated_pixel1_time_ms flops params
python -m gcn.proxyless.train --loss lambdarank --early_stop_patience 50 --learning_rate 0.002 --opt_type adam --wd 5e-4 --epochs 300 --bs 20 \
    --resume /path/to/previous/output.pth
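
Note that --resume points at the checkpoint written by the pretrain step. A chained run might look like this, with a hypothetical output path; substitute whatever the pretrain stage actually writes:

python -m gcn.proxyless.pretrain --metric_keys ws_accuracy simulated_pixel1_time_ms flops params
python -m gcn.proxyless.train --loss lambdarank --early_stop_patience 50 --learning_rate 0.002 --opt_type adam --wd 5e-4 --epochs 300 --bs 20 \
    --resume outputs/proxyless_pretrain/output.pth  # hypothetical path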

Train final model

Validation set:

python -m torch.distributed.launch --nproc_per_node=16 \
    --use_env --module \
    tools.standalone.imagenet_train \
    --output "$OUTPUT_DIR" "$ARCH" "$IMAGENET_DIR" \
    -b 256 --lr 2.64 --warmup-lr 0.1 \
    --warmup-epochs 5 --epochs 90 --sched cosine --num-classes 1000 \
    --opt rmsproptf --opt-eps 1. --weight-decay 4e-5 -j 8 --dist-bn reduce \
    --bn-momentum 0.01 --bn-eps 0.001 --drop 0. --no-held-out-val

Test set:

python -m torch.distributed.launch --nproc_per_node=16 \
    --use_env --module \
    tools.standalone.imagenet_train \
    --output "$OUTPUT_DIR" "$ARCH" "$IMAGENET_DIR" \
    -b 256 --lr 2.64 --warmup-lr 0.1 \
    --warmup-epochs 9 --epochs 360 --sched cosine --num-classes 1000 \
    --opt rmsproptf --opt-eps 1. --weight-decay 4e-5 -j 8 --dist-bn reduce \
    --bn-momentum 0.01 --bn-eps 0.001 --drop 0.15
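
Both launches rely on the same shell variables; a minimal setup might look like this (all values are placeholders, and the architecture identifier format is an assumption; check tools.standalone.imagenet_train for the exact spec):

OUTPUT_DIR=outputs/final-model        # placeholder output directory
ARCH=acenas-m1                        # architecture to train; identifier format is an assumption
IMAGENET_DIR=/path/to/your/imagenet   # placeholder ImageNet root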