An implementation of Neural Architecture Search with Random Labels (CVPR 2021 poster) in PyTorch.


Neural Architecture Search with Random Labels (RLNAS)

Introduction

This project provides an implementation of Neural Architecture Search with Random Labels (CVPR 2021 poster) in PyTorch. Experiments are evaluated on multiple datasets (NAS-Bench-201 and ImageNet) and multiple search spaces (DARTS-like and MobileNet-like). RLNAS achieves results comparable to, or even better than, state-of-the-art NAS methods such as PC-DARTS and Single Path One-Shot, even though those counterparts use full ground-truth labels for searching. We hope our findings can inspire new understandings of the essence of NAS.

Requirements

  • PyTorch 1.4
  • Python 3.5+

Search results

1. Results in the NAS-Bench-201 search space

nas_201_results

2. Results in the DARTS search space

darts_search_sapce_results

Architecture visualization

1) Architecture searched on CIFAR-10

  • RLDARTS = Genotype(
    normal=[
    ('sep_conv_5x5', 0), ('sep_conv_3x3', 1),
    ('dil_conv_3x3', 0), ('sep_conv_5x5', 2),
    ('sep_conv_3x3', 0), ('dil_conv_5x5', 3),
    ('dil_conv_5x5', 1), ('dil_conv_3x3', 2)],
    normal_concat=[2, 3, 4, 5],
    reduce=[
    ('sep_conv_5x5', 0), ('dil_conv_3x3', 1),
    ('sep_conv_3x3', 0), ('sep_conv_5x5', 2),
    ('dil_conv_3x3', 1), ('sep_conv_3x3', 3),
    ('max_pool_3x3', 1), ('sep_conv_5x5', 2)],
    reduce_concat=[2, 3, 4, 5])

  • Normal cell: architecture_searched_on_cifar10

  • Reduction cell: architecture_searched_on_cifar10
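
For reference, the genotypes listed in this section use the standard DARTS encoding: each pair is (operation, index of the input node), listed two per intermediate node, and normal_concat / reduce_concat name the intermediate nodes whose outputs are concatenated to form the cell output. In a conventional DARTS-style genotypes.py the container is a namedtuple, roughly:

from collections import namedtuple

# Standard DARTS-style genotype container: `normal`/`reduce` hold
# (operation, input_node_index) pairs, two per intermediate node;
# `normal_concat`/`reduce_concat` list the nodes concatenated at the cell output.
Genotype = namedtuple('Genotype', 'normal normal_concat reduce reduce_concat')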

2) Architecture searched on ImageNet-1k without FLOPs constraint

  • RLDARTS = Genotype( normal=[
    ('sep_conv_3x3', 0), ('sep_conv_3x3', 1),
    ('sep_conv_3x3', 1), ('sep_conv_3x3', 2),
    ('sep_conv_3x3', 0), ('sep_conv_5x5', 1),
    ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)],
    normal_concat=[2, 3, 4, 5],
    reduce=[
    ('sep_conv_3x3', 0), ('sep_conv_3x3', 1),
    ('sep_conv_5x5', 0), ('sep_conv_3x3', 2),
    ('sep_conv_5x5', 0), ('sep_conv_5x5', 2),
    ('sep_conv_3x3', 2), ('sep_conv_3x3', 4)],
    reduce_concat=[2, 3, 4, 5])

  • Normal cell: architecture_searched_on_imagenet_no_flops_constrain

  • Reduction cell: architecture_searched_on_cifar10

3) Architecture searched on ImageNet-1k with a 600M FLOPs constraint

  • RLDARTS = Genotype(
    normal=[
    ('sep_conv_3x3', 0), ('sep_conv_3x3', 1),
    ('skip_connect', 1), ('sep_conv_3x3', 2),
    ('sep_conv_3x3', 1), ('sep_conv_3x3', 2),
    ('skip_connect', 0), ('sep_conv_3x3', 4)],
    normal_concat=[2, 3, 4, 5],
    reduce=[ ('sep_conv_3x3', 0), ('max_pool_3x3', 1),
    ('sep_conv_3x3', 0), ('skip_connect', 1),
    ('sep_conv_3x3', 0), ('dil_conv_3x3', 1),
    ('skip_connect', 0), ('sep_conv_3x3', 1)],
    reduce_concat=[2, 3, 4, 5])

  • Normal cell: architecture_searched_on_imagenet_no_flops_constrain

  • Reduction cell: architecture_searched_on_cifar10

3. Results in the MobileNet search space

This paper adopts the MobileNet-like search space proposed in ProxylessNAS. The SuperNet contains 21 choice blocks, and each block has 7 alternatives: 6 MobileNet blocks (combinations of kernel size {3, 5, 7} and expand ratio {3, 6}) and 'skip-connect'.
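
As an illustration of this layout (a sketch based only on the description above, not the repository's actual code; the MBBlock class and choice_block function are hypothetical names), one choice block can be assembled as a list of the seven candidate operations:

import itertools
import torch.nn as nn

class MBBlock(nn.Sequential):
    """Minimal MobileNet-style inverted-residual block (illustrative only)."""
    def __init__(self, in_ch, out_ch, kernel_size, expand_ratio, stride=1):
        mid = in_ch * expand_ratio
        super().__init__(
            nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
            nn.Conv2d(mid, mid, kernel_size, stride, kernel_size // 2, groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
            nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )

def choice_block(in_ch, out_ch, stride):
    # 6 MobileNet blocks: kernel size {3, 5, 7} x expand ratio {3, 6}
    ops = [MBBlock(in_ch, out_ch, k, e, stride)
           for k, e in itertools.product([3, 5, 7], [3, 6])]
    # plus 'skip-connect' (only meaningful when the block preserves the shape)
    if stride == 1 and in_ch == out_ch:
        ops.append(nn.Identity())
    return nn.ModuleList(ops)

During supernet training, a single alternative per block is sampled uniformly, following the Single Path One-Shot scheme cited below.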

mobilenet_search_sapce_results

Architecture visualization

mobilenet_search_sapce_results

Usage

  • RLNAS in NAS-Bench-201

1) Enter the work directory

cd nas_bench_201

2) Train the supernet with random labels

bash ./scripts-search/algos/train_supernet.sh cifar10 0 1
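
Training with random labels is the core idea of RLNAS: the supernet never sees the ground-truth labels. As a rough, self-contained illustration (not the script's actual implementation), one way to realize this is to assign every image a fixed random label once and reuse it for the whole run:

import torch
from torch.utils.data import Dataset

class RandomLabelDataset(Dataset):
    """Wraps a dataset and replaces ground-truth labels with fixed random ones."""
    def __init__(self, base, num_classes, seed=0):
        self.base = base
        g = torch.Generator().manual_seed(seed)
        # One random label per sample, drawn once so labels stay consistent across epochs.
        self.labels = torch.randint(num_classes, (len(base),), generator=g)

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        image, _ = self.base[idx]           # discard the true label
        return image, int(self.labels[idx])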

3) Evolution search with angle

bash ./scripts-search/algos/evolution_search_with_angle.sh cifar10 0 1

4) Calculate correlation

bash ./scripts-search/algos/cal_correlation.sh cifar10 0 1
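
This step measures how well the angle-based ranking of architectures agrees with their ground-truth accuracies in NAS-Bench-201. As a sketch of the kind of statistic reported (the variable names and values below are purely illustrative), a rank correlation can be computed with SciPy:

from scipy.stats import kendalltau, spearmanr

# angle_scores: angle metric per sampled architecture
# gt_accuracies: corresponding accuracies queried from NAS-Bench-201
angle_scores = [0.42, 0.17, 0.88, 0.35]
gt_accuracies = [91.2, 88.5, 93.9, 90.1]

tau, _ = kendalltau(angle_scores, gt_accuracies)
rho, _ = spearmanr(angle_scores, gt_accuracies)
print(f"Kendall tau: {tau:.3f}, Spearman rho: {rho:.3f}")
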
  • RLNAS in DARTS search space

1) Enter the work directory

cd darts_search_space

Search an architecture on CIFAR-10

cd cifar10/rlnas/

or search an architecture on ImageNet

cd imagenet/rlnas/

2) Train the supernet with random labels

cd train_supernet
bash run_train.sh

3) Evolution search with angle

cd evolution_search
cp ../train_supernet/models/checkpoint_epoch_50.pth.tar ./model_and_data/
cp ../train_supernet/models/checkpoint_epoch_0.pth.tar ./model_and_data/
bash run_server.sh
bash run_test.sh
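
The two checkpoints are copied because the angle metric scores a candidate architecture by the angle between its weight vector at initialization (epoch 0) and after supernet training (epoch 50). A minimal sketch of that computation, assuming the checkpoints store a state dict and that the caller selects the weight keys belonging to the candidate path (helper and key names are illustrative):

import torch

def angle_between(init_state, trained_state, keys):
    """Angle between flattened weight vectors at initialization and after training."""
    v0 = torch.cat([init_state[k].flatten() for k in keys])
    v1 = torch.cat([trained_state[k].flatten() for k in keys])
    cos = torch.dot(v0, v1) / (v0.norm() * v1.norm())
    return torch.acos(cos.clamp(-1.0, 1.0)).item()

# Hypothetical usage (checkpoint layout may differ in this repo):
# init = torch.load('model_and_data/checkpoint_epoch_0.pth.tar')['state_dict']
# trained = torch.load('model_and_data/checkpoint_epoch_50.pth.tar')['state_dict']
# score = angle_between(init, trained, keys_for_candidate_path)

The evolution search ranks candidates by this angle, computed only over the weights that each candidate actually uses in the supernet.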

4) Architecture evaluation

cd retrain_architetcure

Add the searched architecture to genotypes.py
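
For example, the CIFAR-10 genotype listed above can be registered in genotypes.py under a new identifier (the name RLDARTS_CIFAR10 is only an illustration; use whatever name run_retrain.sh expects):

# genotypes.py (excerpt) -- Genotype is the namedtuple already defined there
RLDARTS_CIFAR10 = Genotype(
    normal=[('sep_conv_5x5', 0), ('sep_conv_3x3', 1),
            ('dil_conv_3x3', 0), ('sep_conv_5x5', 2),
            ('sep_conv_3x3', 0), ('dil_conv_5x5', 3),
            ('dil_conv_5x5', 1), ('dil_conv_3x3', 2)],
    normal_concat=[2, 3, 4, 5],
    reduce=[('sep_conv_5x5', 0), ('dil_conv_3x3', 1),
            ('sep_conv_3x3', 0), ('sep_conv_5x5', 2),
            ('dil_conv_3x3', 1), ('sep_conv_3x3', 3),
            ('max_pool_3x3', 1), ('sep_conv_5x5', 2)],
    reduce_concat=[2, 3, 4, 5])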

bash run_retrain.sh
  • RLNAS in MobileNet search space

The commands are almost the same as those for RLNAS in the DARTS search space, except that you need to run 'bash run_generate_flops_lookup_table.sh' before the evolution search.

Note: set up a server for the distributed search

tmux new -s mq_server
sudo apt update
sudo apt install rabbitmq-server
sudo service rabbitmq-server start
sudo rabbitmqctl add_user test test
sudo rabbitmqctl set_permissions -p / test '.*' '.*' '.*'

Before searching, please modify the host and username in the config file evolution_search/config.py.
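
A hypothetical excerpt of the two settings to adjust (check the actual field names in evolution_search/config.py; the username matches the RabbitMQ user created above, and the host is a placeholder):

# evolution_search/config.py (excerpt, illustrative)
host = '192.168.1.100'   # address of the machine running rabbitmq-server
username = 'test'        # RabbitMQ user created with `rabbitmqctl add_user`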

Citation

If you find that this project helps your research, please consider citing some of the following papers:

@inproceedings{zhang2021neural,
  title={Neural Architecture Search with Random Labels},
  author={Zhang, Xuanyang and Hou, Pengfei and Zhang, Xiangyu and Sun, Jian},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}
@inproceedings{hu2020angle,
  title={Angle-based search space shrinking for neural architecture search},
  author={Hu, Yiming and Liang, Yuding and Guo, Zichao and Wan, Ruosi and Zhang, Xiangyu and Wei, Yichen and Gu, Qingyi and Sun, Jian},
  booktitle={European Conference on Computer Vision},
  pages={119--134},
  year={2020},
  organization={Springer}
}
@inproceedings{guo2020single,
  title={Single path one-shot neural architecture search with uniform sampling},
  author={Guo, Zichao and Zhang, Xiangyu and Mu, Haoyuan and Heng, Wen and Liu, Zechun and Wei, Yichen and Sun, Jian},
  booktitle={European Conference on Computer Vision},
  pages={544--560},
  year={2020},
  organization={Springer}
}