2020
DOI: 10.1007/978-3-030-59830-3_26
PD-DARTS: Progressive Discretization Differentiable Architecture Search

Cited by 757 publications (2,685 citation statements) · References 2 publications
“…NAS vs. D/T-NAS. Traditional NAS (e.g., DARTS [25]) randomly samples half of the data from the training set as the support set and uses the remaining half as the query set. In contrast, D/T-NAS ( Fig.…”
Section: Domain/Type-Aware Meta-NAS
confidence: 99%
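The split quoted above is straightforward to sketch. The following is a minimal illustration, assuming a plain index-shuffle split; the function name and seed handling are assumptions, not part of the cited work:

```python
import random

def split_support_query(train_data, seed=0):
    """Sketch of the DARTS-style split quoted above: shuffle the training
    set, use one half as the support set (for network weights) and the
    other half as the query set (for architecture parameters)."""
    idx = list(range(len(train_data)))
    random.Random(seed).shuffle(idx)
    half = len(idx) // 2
    support = [train_data[i] for i in idx[:half]]
    query = [train_data[i] for i in idx[half:]]
    return support, query

support, query = split_support_query(list(range(10)))
```

Each half sees disjoint examples, so architecture parameters are evaluated on data the weights were not trained on.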
“…Lorraine et al [108] introduce an algorithm for inexpensive GD-based hyperparameter optimization. Liu et al [109] employ GD in the DARTS algorithm, which optimizes both the network weights and the architecture. The authors use a relaxation trick to make a weighted sum of candidate operations differentiable, and then apply gradient descent to directly train the weights.…”
Section: NAS Based on GD
confidence: 99%
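The relaxation described here — a softmax-weighted sum over candidate operations — can be sketched in a few lines. This is an illustrative toy, not DARTS itself: the operation set below (identity, zero, ReLU) stands in for the real convolution/pooling/skip candidates:

```python
import math

def softmax(alpha):
    """Numerically stable softmax over the architecture parameters alpha."""
    m = max(alpha)
    exps = [math.exp(a - m) for a in alpha]
    s = sum(exps)
    return [e / s for e in exps]

# Toy candidate operations on one edge (stand-ins for conv/pool/skip/zero).
ops = [
    lambda x: list(x),                   # identity / skip connection
    lambda x: [0.0 for _ in x],          # zero (no connection)
    lambda x: [max(v, 0.0) for v in x],  # ReLU as a toy "conv" stand-in
]

def mixed_op(x, alpha):
    """Continuous relaxation: softmax(alpha)-weighted sum of all candidates,
    which makes the discrete architecture choice differentiable in alpha."""
    w = softmax(alpha)
    out = [0.0] * len(x)
    for wi, op in zip(w, ops):
        for i, v in enumerate(op(x)):
            out[i] += wi * v
    return out

y = mixed_op([-1.0, 2.0], [0.0, 0.0, 0.0])  # uniform weights before training
```

Because every candidate contributes to the output, gradients flow to every alpha, and gradient descent can then favor the best operation on each edge.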
“…The authors use a relaxation trick to make a weighted sum of candidate operations differentiable, and then apply gradient descent to directly train the weights. Inspired by DARTS [109], Dong et al [110] introduce gradient-based search using the differentiable architecture sampler (GDAS) method. The authors develop a differentiable architecture sampler which samples individual architectures in a differentiable way to accelerate the architecture search procedure.…”
Section: NAS Based on GD
confidence: 99%
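Differentiable sampling of a single architecture, as GDAS does, is commonly realized with the Gumbel-softmax trick. The sketch below shows that idea in isolation; the temperature value and operation count are illustrative assumptions, not GDAS's actual settings:

```python
import math
import random

def gumbel_softmax(alpha, tau=0.5, rng=None):
    """Gumbel-softmax relaxation, in the spirit of GDAS's sampler: perturb
    the architecture logits with Gumbel(0,1) noise, then take a
    low-temperature softmax so one operation dominates each sample while
    the whole expression stays differentiable in alpha."""
    rng = rng or random.Random(0)
    g = [-math.log(-math.log(rng.random())) for _ in alpha]  # Gumbel noise
    z = [(a + gi) / tau for a, gi in zip(alpha, g)]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

w = gumbel_softmax([2.0, 0.5, -1.0])
chosen = max(range(len(w)), key=lambda i: w[i])  # sampled op for this edge
```

Only the sampled (near one-hot) path is evaluated per step, which is what makes this cheaper than summing over all candidates as in plain DARTS.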
“…Impressive results have been shown, for example, with reinforcement learning (RL) [31], [32]. Recent methods such as differentiable architecture search (DARTS) [33] reduce the search time by formulating the task in a differentiable manner. To reduce the redundancy in the network space, partially-connected DARTS (PC-DARTS) was recently introduced to perform a more efficient search without compromising the performance of DARTS [34].…”
Section: F. Neural Architecture Search
confidence: 99%
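The partial connection idea behind PC-DARTS can be illustrated with a toy sketch: only a fraction 1/k of the channels pass through the searched mixed operation, while the rest bypass it unchanged. The value of k, the operation set, and the function name below are assumptions for illustration only:

```python
import math

def partial_channel_mix(x, alpha, k=4):
    """PC-DARTS-style sketch: route only 1/k of the channels through the
    softmax-weighted mixed op and pass the remaining channels through
    unchanged, which reduces memory during the search."""
    m = len(x) // k                  # channels routed through the search
    head, tail = x[:m], x[m:]
    mx = max(alpha)
    exps = [math.exp(a - mx) for a in alpha]
    s = sum(exps)
    w = [e / s for e in exps]
    ops = [lambda t: list(t), lambda t: [max(v, 0.0) for v in t]]  # toy ops
    mixed = [0.0] * len(head)
    for wi, op in zip(w, ops):
        for i, v in enumerate(op(head)):
            mixed[i] += wi * v
    return mixed + list(tail)

x = [float(i) - 4.0 for i in range(8)]       # 8 "channels"
out = partial_channel_mix(x, [0.0, 0.0], k=4)
```

With k=4 here, six of the eight channels skip the mixed operation entirely, so the memory and compute cost of the search scales with len(x)/k rather than len(x).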