2022
DOI: 10.1109/tcad.2021.3100249
Designing Efficient DNNs via Hardware-Aware Neural Architecture Search and Beyond

Cited by 6 publications (1 citation statement)
References 38 publications (118 reference statements)
“…To overcome such limitations, several efficient latency prediction strategies have been recently established. For example, [9,13,14,61,82,99] leverage the latency lookup table to approximate the on-device latency for different architecture candidates. In parallel, [12,24,81,85,92,100,101] turn back to learning-based regression approaches for the latency prediction purpose, which typically train an accurate latency predictor…”
Section: Speedup Search Techniques (mentioning)
confidence: 99%
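The quoted passage contrasts two families of latency estimation used in hardware-aware NAS: summing per-operator entries from an on-device latency lookup table, and training a learning-based regressor that maps an architecture encoding to measured latency. Below is a minimal Python sketch of both ideas; the operator keys, the count-vector encoding, and all latency numbers are illustrative assumptions, not values or APIs taken from the cited works.

```python
"""Hypothetical sketch of the two latency-prediction strategies mentioned in
the citing passage. Operator names, features, and measurements are made up
for illustration only."""
import numpy as np

# --- Strategy 1: latency lookup table ---------------------------------
# Per-operator latencies (milliseconds), assumed to be measured once on the
# target device. Keys are (op_type, channels, resolution) placeholders.
LATENCY_LUT_MS = {
    ("conv3x3", 32, 112): 1.8,
    ("conv5x5", 32, 112): 3.1,
    ("mbconv3x3_e6", 64, 56): 2.4,
    ("mbconv5x5_e6", 64, 56): 3.9,
}

def lut_latency(architecture):
    """Approximate end-to-end latency by summing per-op table entries."""
    return sum(LATENCY_LUT_MS.get(op, 0.0) for op in architecture)

# --- Strategy 2: learning-based latency regression --------------------
def encode(architecture, op_vocab):
    """Encode an architecture as a count vector over the op vocabulary."""
    vec = np.zeros(len(op_vocab))
    for op in architecture:
        vec[op_vocab.index(op)] += 1.0
    return vec

def fit_latency_predictor(architectures, measured_ms, op_vocab):
    """Least-squares regressor from architecture encodings to latency."""
    X = np.stack([encode(a, op_vocab) for a in architectures])
    w, *_ = np.linalg.lstsq(X, np.asarray(measured_ms), rcond=None)
    return lambda arch: float(encode(arch, op_vocab) @ w)

if __name__ == "__main__":
    vocab = list(LATENCY_LUT_MS)
    candidate = [("conv3x3", 32, 112), ("mbconv5x5_e6", 64, 56)]
    print("LUT estimate:", lut_latency(candidate), "ms")

    # Tiny synthetic training set standing in for on-device measurements.
    train_archs = [[("conv3x3", 32, 112)], [("conv5x5", 32, 112)],
                   [("mbconv3x3_e6", 64, 56)], [("mbconv5x5_e6", 64, 56)]]
    train_ms = [1.9, 3.0, 2.5, 4.0]
    predictor = fit_latency_predictor(train_archs, train_ms, vocab)
    print("Regressor estimate:", predictor(candidate), "ms")
```

The lookup-table route avoids any training but assumes per-operator latencies add up; the regression route needs measured (architecture, latency) pairs but can capture interactions a simple sum misses, which is the trade-off the citing passage alludes to.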