2021
DOI: 10.48550/arxiv.2107.00860
Preprint

Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets

Abstract: Despite the success of recent Neural Architecture Search (NAS) methods on various tasks, which have been shown to output networks that largely outperform human-designed networks, conventional NAS methods have mostly tackled the optimization of searching for the network architecture for a single task (dataset), which does not generalize well across multiple tasks (datasets). Moreover, since such task-specific methods search for a neural architecture from scratch for every given task, they incur a large computational c…

Cited by 3 publications (3 citation statements)
References 21 publications
“…BOHB (Falkner, Klein, and Hutter 2018) combines Hyperband (Li et al. 2017) and BO by early-stopping bad evaluations, while MFES (Li et al. 2021a, 2022a) improves BOHB by taking evaluations at all resource levels into consideration when sampling new configurations to evaluate. Transfer learning (Lee, Hyung, and Hwang 2021) is applied to learn from previous tasks.…”
Section: Related Work
confidence: 99%
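The statement above describes how BOHB combines Hyperband-style early stopping with Bayesian optimization: many configurations are evaluated cheaply, and only the most promising survive to larger resource budgets. Purely as an illustrative sketch of that successive-halving idea (not code from the cited papers; sample_config, evaluate, and the budget settings are hypothetical placeholders):

import random

def successive_halving(sample_config, evaluate, min_budget=1, max_budget=27, eta=3):
    # One successive-halving bracket: start with many cheap evaluations,
    # keep the best 1/eta of configurations at each budget level (rung).
    n_initial = int(max_budget / min_budget)           # illustrative choice
    configs = [sample_config() for _ in range(n_initial)]
    budget = min_budget
    while budget <= max_budget and len(configs) > 1:
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[:max(1, len(scored) // eta)]  # early-stop the rest
        budget *= eta
    return configs[0]

# Toy usage: minimize (x - 0.3)^2, where a larger budget only reduces noise.
best = successive_halving(
    sample_config=lambda: {"x": random.random()},
    evaluate=lambda cfg, b: (cfg["x"] - 0.3) ** 2 + random.gauss(0, 0.1 / b),
)
print(best)

In BOHB the random sample_config above is replaced by configurations proposed from a density model fitted to past evaluations, and MFES additionally takes evaluations from all resource levels into account rather than only the highest one.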
“…Learning to generate graph-structural data requires not only knowing the nodes' feature distribution, but also a deep understanding of the underlying graph topology, which is essential to modelling various graph instances, such as social networks [1], [2], molecule structures [3], [4], neural architectures [5], recommender systems [6], etc. Conventional likelihood-based graph generative models, e.g.…”
Section: Introduction
confidence: 99%
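The statement above refers to likelihood-based graph generative models, which assign an explicit probability to a graph's topology (and, more generally, to its node features). As a rough, self-contained illustration of the simplest such model only (an edge-independent decoder in the spirit of graph autoencoders, not the model proposed in the cited works; the latent codes z below are random placeholders rather than learned embeddings):

import numpy as np

rng = np.random.default_rng(0)

def edge_probs(z):
    # Edge-independent decoder: P(A_ij = 1) = sigmoid(z_i · z_j).
    return 1.0 / (1.0 + np.exp(-(z @ z.T)))

def graph_log_likelihood(adj, z):
    # Bernoulli log-likelihood of an observed adjacency matrix under the decoder.
    p = np.clip(edge_probs(z), 1e-9, 1.0 - 1e-9)
    return float(np.sum(adj * np.log(p) + (1 - adj) * np.log(1 - p)))

def sample_graph(z):
    # Generate a new graph by sampling each edge independently
    # (simplified: directed edges and self-loops are allowed).
    p = edge_probs(z)
    return (rng.random(p.shape) < p).astype(int)

# Toy usage with 4 nodes and 8-dimensional latent codes.
z = rng.normal(size=(4, 8))
adj = sample_graph(z)
print(adj)
print(graph_log_likelihood(adj, z))

As the statement notes, capturing realistic topology requires more than such independent edge probabilities, which motivates the more expressive generative models used for structures like molecules and neural architectures.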
“…Despite that, it is worth mentioning that such incorporation of pre-existing knowledge is a standard premise of all Meta-Learning (MtL) approaches (DUDZIAK et al., 2020; WANG et al., 2020; MUÑOZ et al., 2018). Furthermore, in alignment with prior research in the domains of NAS and MtL (LI et al., 2021; DUDZIAK et al., 2020; HYUNG; HWANG, 2021), it is assumed that the performance metrics of existing models serving as learning source material are readily available during the search process. Analogously, the training times for GenNAS-N, which are markedly shorter on both NAS-Bench-101 and NAS-Bench-201 than those of evolutionary, reinforcement-learning, and gradient-based approaches, are derived from prior knowledge.…”
Section: Standalone Predictive Performance Analysis
confidence: 99%