2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
DOI: 10.1109/iccad45719.2019.8942095
A Uniform Modeling Methodology for Benchmarking DNN Accelerators

Cited by 10 publications (11 citation statements)
References 17 publications
“…TANGO [44] employs those metrics to assess CNN models deployed on several hardware platforms. The importance of energy usage is emphasized by Palit et al. [62], who present an energy estimation model along with empirical data from well-established CNNs. DNNTune [78] uses inference time and energy consumption to tune both CNNs and quantized networks for several application scenarios.…”
Section: Deep Neural Network Benchmark Analysis
confidence: 99%
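The metrics named in this statement (inference time and energy) are straightforward to sketch. The following is a minimal illustration, not any of the cited tools' actual code: `model_fn` is a hypothetical stand-in for a network's forward pass, and the energy estimate is the simple power-times-time model that power-based estimators rely on.

```python
import time

def measure_inference_latency(model_fn, inp, warmup=3, runs=10):
    """Average wall-clock latency of one inference call, in seconds.

    `model_fn` is a hypothetical callable standing in for a CNN's
    forward pass; real benchmarks would call the deployed model here.
    """
    for _ in range(warmup):           # discard warm-up runs (caches, JIT)
        model_fn(inp)
    start = time.perf_counter()
    for _ in range(runs):
        model_fn(inp)
    return (time.perf_counter() - start) / runs

def estimate_energy_joules(latency_s, avg_power_watts):
    """Crude energy model: E = P * t (average power times latency)."""
    return avg_power_watts * latency_s

# Toy "model": a fixed-cost computation used only for demonstration.
lat = measure_inference_latency(lambda x: sum(i * i for i in range(10_000)), None)
energy = estimate_energy_joules(lat, avg_power_watts=5.0)
```

Real tools refine both halves of this sketch, e.g. by sampling power draw during the run rather than assuming a constant average.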
“…where: 𝑙 is the function; 𝛾 ∈ (0,1) is the factor and 𝜏 ∈ [0,1) is the threshold. Since the operation of lowering dropout probability with the predefined factor 𝛾 is differentiable, we can still optimize the opponent and the network-optimizer through (8) and (9). The compression process stops when the fraction of remaining parameters in 𝐹 𝑊 (𝑥|𝑧) is smaller than a user-defined value 𝛼 ∈ (0,1).…”
Section: Network Compressing Routine
confidence: 99%
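The loop this statement describes can be sketched as follows. This is a loose interpretation of the quoted prose only: the cited paper's actual rule is given by its equations (8) and (9), which are not reproduced here, and the treatment of probabilities below the threshold 𝜏 as pruned is an assumption of this sketch.

```python
def compress_step(probs, gamma=0.5, tau=0.05):
    """One compression step: scale each parameter's keep-probability by
    the factor gamma; probabilities falling below the threshold tau are
    treated as pruned (set to 0). Interpretation of the quoted text,
    not the cited paper's exact update rule."""
    out = []
    for p in probs:
        p *= gamma
        out.append(0.0 if p < tau else p)
    return out

def remaining_fraction(probs):
    """Fraction of parameters still active (probability above zero)."""
    return sum(1 for p in probs if p > 0) / len(probs)

probs = [0.9, 0.5, 0.2, 0.08]     # hypothetical per-parameter probabilities
alpha = 0.5                        # user-defined stopping ratio
while remaining_fraction(probs) >= alpha:
    probs = compress_step(probs)   # stop once fewer than alpha remain
```

The stopping criterion mirrors the quoted condition: iteration ends once the share of surviving parameters drops below 𝛼.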
“…Memory capacity: neural networks achieve high performance when using a large number of neurons, which in turn requires large memory consumption to hold and process the model [8,9], [10]. As a result, compression could lower the memory requirements.…”
Section: Introduction, Formulation Of the Problem
confidence: 99%
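The memory argument in this statement is simple arithmetic: a model's weight footprint is its parameter count times the storage width per parameter. A minimal sketch, with entirely hypothetical layer sizes, showing how one compression option (8-bit quantization instead of 32-bit floats) cuts the footprint by 4x:

```python
def model_memory_bytes(layer_param_counts, bytes_per_param=4):
    """Rough memory needed to hold a model's weights: total parameter
    count times storage width (4 bytes for float32)."""
    return sum(layer_param_counts) * bytes_per_param

# Hypothetical small CNN: per-layer parameter counts (conv, conv, fc).
layers = [1_728, 73_728, 4_096_000]

full = model_memory_bytes(layers)                      # float32 storage
compressed = model_memory_bytes(layers, bytes_per_param=1)  # int8 storage
```

This counts weights only; activations, optimizer state, and framework overhead would add to the real figure.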
“…TANGO [35] employs the metrics of inference time, power consumption and memory usage to assess CNN models implemented on a variety of hardware platforms including an embedded GPU and an FPGA. Palit et al. [36] highlight the importance of energy usage by presenting an energy estimation model and empirical data from several CNNs.…”
Section: B. Benchmarking
confidence: 99%