2021
DOI: 10.48550/arxiv.2103.10584
Preprint

HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark

Chaojian Li,
Zhongzhi Yu,
Yonggan Fu
et al.

Abstract: HardWare-aware Neural Architecture Search (HW-NAS) has recently gained tremendous attention by automating the design of deep neural networks deployed in more resource-constrained daily life devices. Despite its promising performance, developing optimal HW-NAS solutions can be prohibitively challenging as it requires cross-disciplinary knowledge in the algorithm, micro-architecture, and device-specific compilation. First, to determine the hardware-cost to be incorporated into the NAS process, existing works mos…

Cited by 12 publications (21 citation statements)
References 25 publications
“…We also note a few limitations of our work. We have performed experiments over 14 NAS tasks, but there are new benchmarks that have been published recently [37,48,41,31]. We have adopted a conservative multi-fidelity technique, but less conservative multi-fidelity techniques are desired as long as robustness can be maintained.…”
Section: Conclusion and Limitations
confidence: 99%
“…overclocking, memory swapping, power usage maximization). Similar to HW-NAS-Bench [11], MAPLE-Edge leverages this domain knowledge to build an optimized, generic hardware-cost collection pipeline that automates the process of collecting latency measurements (as seen in Figure 1).…”
Section: Data Collection Pipeline
confidence: 99%
“…Similar to MAPLE [1], BRP-NAS [7], HELP [10], and HW-NAS-Bench [11], MAPLE-Edge also uses the NAS-BENCH-201 [6] dataset for all experiments. NAS-BENCH-201 is a collection of 15,625 neural cell candidates; every architecture shares a fixed cell topology, and each edge of the cell is assigned one of five possible operations: {none, skip-connection, conv1x1, conv3x3, avgpool3x3}.…”
Section: Edge Dataset
confidence: 99%
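The 15,625 figure quoted above follows directly from the search-space structure: a NAS-Bench-201 cell has 6 edges (this edge count comes from the NAS-Bench-201 paper, not from this page), and each edge independently takes one of the 5 listed operations, giving 5^6 = 15,625 candidates. A minimal sketch of that enumeration:

```python
from itertools import product

# The five NAS-Bench-201 operations listed above.
OPS = ["none", "skip-connection", "conv1x1", "conv3x3", "avgpool3x3"]

# Each cell is a fixed DAG whose 6 edges each carry one operation
# (edge count per the NAS-Bench-201 paper; an assumption here).
NUM_EDGES = 6

def enumerate_cells():
    """Yield every candidate cell as a 6-tuple of operation names."""
    yield from product(OPS, repeat=NUM_EDGES)

# Counting the enumeration recovers the benchmark's stated size.
search_space_size = sum(1 for _ in enumerate_cells())
print(search_space_size)  # 5 ** 6 = 15625 candidates
```

This is why a fixed-topology benchmark like NAS-BENCH-201 is tractable to measure exhaustively on real hardware: the space is finite and small enough that latency and energy can be recorded for every candidate.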
“…For this reason, NAS-Bench-101 [19] and -201 [48] were proposed to solve the reproducibility issues and enable fair evaluation in NAS research. Recently, HW-NAS-Bench [72] was proposed to benchmark the inference latency and energy on several hardware devices. Despite the progress of benchmarks in NAS and other areas, model quantization lacks such a standard to foster reproducibility and deployability.…”
Section: Related Work
confidence: 99%