2020
DOI: 10.48550/arxiv.2003.11236
Preprint

GreedyNAS: Towards Fast One-Shot NAS with Greedy Supernet

Abstract: Training a supernet matters for one-shot neural architecture search (NAS) methods since it serves as a basic performance estimator for different architectures (paths). Current methods mainly hold the assumption that a supernet should give a reasonable ranking over all paths. They thus treat all paths equally, and spare much effort to train paths. However, it is harsh for a single supernet to evaluate accurately on such a huge-scale search space (e.g., 7^21). In this paper, instead of covering all paths, we ea…
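The core idea in the abstract (greedily focusing supernet training on potentially-good paths instead of all paths) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the toy linear search space, the candidate count, the keep ratio, and the use of a small validation batch as the screening signal are not the paper's actual ImageNet configuration.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

N_LAYERS, N_CHOICES = 4, 3  # toy search space with 3^4 = 81 paths

class Supernet(nn.Module):
    def __init__(self, dim=16, n_classes=10):
        super().__init__()
        # one candidate op per (layer, choice); a path picks one choice per layer
        self.ops = nn.ModuleList([
            nn.ModuleList([nn.Linear(dim, dim) for _ in range(N_CHOICES)])
            for _ in range(N_LAYERS)
        ])
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x, path):
        for layer_ops, choice in zip(self.ops, path):
            x = torch.relu(layer_ops[choice](x))
        return self.head(x)

def sample_path():
    return [random.randrange(N_CHOICES) for _ in range(N_LAYERS)]

def greedy_step(net, opt, train_batch, val_batch, n_candidates=10, keep=3):
    """Screen random paths on a small validation batch, then train only the best ones."""
    xb, yb = train_batch
    xv, yv = val_batch
    candidates = [sample_path() for _ in range(n_candidates)]
    with torch.no_grad():  # screening pass: no gradients needed
        losses = [F.cross_entropy(net(xv, p), yv).item() for p in candidates]
    kept = [p for _, p in sorted(zip(losses, candidates), key=lambda t: t[0])[:keep]]
    for path in kept:  # update supernet weights only on potentially-good paths
        opt.zero_grad()
        F.cross_entropy(net(xb, path), yb).backward()
        opt.step()
    return kept

# illustrative usage with random data
net = Supernet()
opt = torch.optim.SGD(net.parameters(), lr=0.05)
train = (torch.randn(32, 16), torch.randint(0, 10, (32,)))
val = (torch.randn(64, 16), torch.randint(0, 10, (64,)))
good_paths = greedy_step(net, opt, train, val)
```

A full pipeline would presumably repeat greedy_step over many batches and finally rank surviving paths by the same validation score; that outer loop is omitted here and is not recoverable from the truncated abstract alone.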

Citations: cited by 11 publications (4 citation statements)
References: 41 publications (79 reference statements)
“…The super-net is trained only once during search and is then used as a performance estimator. Some studies train the super-net by sampling a single path [17,28,52] in a chain-based search space [21,5,36,53]. By comparison, DARTS-based methods [32,48] introduce architecture parameters that are jointly optimized with the super-net weights and perform a differentiable search in a cell-based space.…”
Section: Related Work
Citation type: Mentioning (confidence: 99%)
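The two supernet styles contrasted in this citation statement can be sketched side by side. The sketch below is a generic illustration rather than code from any of the cited works: the linear ops, dimensions, and sampling policy are assumptions.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One supernet layer holding all candidate ops plus DARTS-style architecture weights."""
    def __init__(self, dim=16, n_ops=3):
        super().__init__()
        self.ops = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_ops)])
        self.alpha = nn.Parameter(torch.zeros(n_ops))  # architecture parameters

    def forward(self, x, single_path=False):
        if single_path:
            # (a) one-shot style: activate a single uniformly sampled op per step
            return self.ops[random.randrange(len(self.ops))](x)
        # (b) DARTS style: softmax-weighted sum over all candidate ops, so alpha
        #     receives gradients and is optimized jointly with the weights
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# illustrative usage
layer = MixedOp()
x = torch.randn(8, 16)
y_single = layer(x, single_path=True)  # single-path sampling (chain-style supernet)
y_darts = layer(x)                     # differentiable relaxation (cell-style supernet)
```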
“…A Shadow Batch Normalization is proposed to stabilize training with Multi-Path activation. GreedyNAS [40] is also a Multi-Path architecture search method that activates multiple paths. In addition to MixPath and GreedyNAS, CoNAS [10] achieves Multi-Path architecture search with Fourier analysis of Boolean functions.…”
Section: Related Work
Citation type: Mentioning (confidence: 99%)
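A rough reading of the Shadow Batch Normalization idea mentioned here is that the summed output of a variable number of activated paths is normalized by a BN layer dedicated to that activation count. The block below is a hedged sketch under that reading; the convolutional ops, channel count, and indexing scheme are illustrative assumptions rather than MixPath's exact design.

```python
import torch
import torch.nn as nn

class MultiPathBlock(nn.Module):
    def __init__(self, channels=16, n_paths=4):
        super().__init__()
        self.paths = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(n_paths)]
        )
        # one "shadow" BN per possible number of simultaneously activated paths
        self.shadow_bns = nn.ModuleList(
            [nn.BatchNorm2d(channels) for _ in range(n_paths)]
        )

    def forward(self, x, active):
        # `active` is the list of path indices activated at this step, e.g. [0, 2]
        out = sum(self.paths[i](x) for i in active)
        # pick the BN whose running statistics match this activation count
        return self.shadow_bns[len(active) - 1](out)

# illustrative usage
block = MultiPathBlock()
x = torch.randn(4, 16, 8, 8)
out = block(x, active=[0, 2, 3])  # three paths summed, normalized by the third shadow BN
```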
“…Neural Architecture Search (NAS) has been suggested as a path forward for alleviating the pain of manual network engineering by automatically optimizing architectures that are superior to hand-designed ones. The automatically searched architectures perform competitively in computer vision tasks such as image classification [45,26,46,27,30,9,29,5,36,43,40], object detection [46], semantic segmentation [24,7] and image generation [14].…”
Section: Introduction
Citation type: Mentioning (confidence: 99%)
“…Existing methods for fully-automated architecture search are mostly based on two strategies: (i) architecture search schemes [33,15,8,28], which look for optimal neural structures in a given search space, and (ii) weight pruning procedures [11,7,31], which attempt to improve the performance of large (over-parameterized) networks by removing the 'less important' connections. Neural models obtained through these methods have achieved good results on benchmark data sets for image classification (e.g.…”
Section: Introduction
Citation type: Mentioning (confidence: 99%)
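Strategy (ii) above, removing the 'less important' connections, is commonly instantiated as magnitude pruning. The sketch below shows only that generic idea; the global 30% ratio, the in-place masking, and the helper name magnitude_prune are assumptions for illustration, not a procedure taken from the cited works.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, ratio: float = 0.3):
    """Zero out the `ratio` fraction of weights with the smallest magnitude (global threshold)."""
    weights = [p for p in model.parameters() if p.dim() > 1]  # skip biases / BN params
    all_vals = torch.cat([w.detach().abs().flatten() for w in weights])
    threshold = torch.quantile(all_vals, ratio)  # global magnitude cutoff
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() >= threshold).float())  # keep large weights, zero the rest

# illustrative usage on a small over-parameterized MLP
model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, ratio=0.3)
```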