2020
DOI: 10.1145/3366636

Design and Optimization of Energy-Accuracy Tradeoff Networks for Mobile Platforms via Pretrained Deep Models

Abstract: Many real-world edge applications including object detection, robotics, and smart health are enabled by deploying deep neural networks (DNNs) on energy-constrained mobile platforms. In this article, we propose a novel approach to trade off energy and accuracy of inference at runtime using a design space called Learning Energy Accuracy Tradeoff Networks (LEANets). The key idea behind LEANets is to design classifiers of increasing complexity using pretrained DNNs to perform input-specific adaptive inference. The…
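The abstract's key idea, input-specific adaptive inference through a cascade of classifiers of increasing complexity, can be illustrated with a minimal sketch. Everything here (the predict_proba interface, per-stage confidence thresholds) is an illustrative assumption, not the paper's actual LEANets design space or training procedure:

```python
import numpy as np

def adaptive_inference(x, classifiers, thresholds):
    """Run classifiers from cheapest to most complex and stop as soon as
    one is confident enough, so easy inputs spend little energy.

    classifiers: models ordered by increasing complexity/energy cost, each
                 exposing predict_proba(x) -> vector of class probabilities
                 (a hypothetical interface, not the paper's API).
    thresholds:  per-stage confidence thresholds; the final, most accurate
                 classifier always answers if every earlier stage declines.
    """
    for clf, tau in zip(classifiers[:-1], thresholds):
        probs = clf.predict_proba(x)
        if np.max(probs) >= tau:              # confident enough: exit early
            return int(np.argmax(probs))
    probs = classifiers[-1].predict_proba(x)  # fallback for hard inputs
    return int(np.argmax(probs))
```

Under this scheme, the average energy cost depends on how many inputs exit at the cheap early stages, which is what makes the energy-accuracy tradeoff tunable via the thresholds.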

Cited by 23 publications (14 citation statements) · References 29 publications
“…While DNN models can be deployed on these intelligent edge platforms by specific runtime systems, which are usually closed-source or unmodifiable, model compression techniques can be used to further optimize inference performance. In addition, several studies address adaptive inference for optimizing deep learning on embedded platforms, including adaptive strategies for neural network inference [22]-[25] and hardware/software co-design [26]-[28], which allow deep neural networks to be configured and executed dynamically at runtime under resource constraints.…”
Section: Background and Related Work
confidence: 99%
“…For instance, BinaryNet [Courbariaux and Bengio, 2016] limits the parameters to 1-bit representations. Recently, [Jayakodi et al., 2020] proposed designing classifiers of increasing complexity using pre-trained Convolutional Neural Networks to perform input-specific adaptive inference.…”
Section: Model Compression
confidence: 99%
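As context for the 1-bit idea mentioned in this statement, here is a hedged numpy sketch of weight binarization. The sign mapping follows BinaryNet's spirit; the mean-magnitude scale factor is an assumption borrowed from later binarization work, and none of this reproduces the actual training procedure (which keeps real-valued weights and binarizes on the fly):

```python
import numpy as np

def binarize(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map real-valued weights to {-1, +1} plus one scalar scale factor."""
    scale = float(np.mean(np.abs(weights)))   # assumed scaling, not BinaryNet's
    binary = np.where(weights >= 0, 1.0, -1.0)
    return binary, scale

# Usage: approximate W by scale * binary, e.g. inside a forward pass.
W = np.random.randn(4, 4)
Wb, alpha = binarize(W)
W_approx = alpha * Wb
```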
“…Moreover, this very specific formulation is known as the cardinality-constrained knapsack or Exact K-item Knapsack Problem (E-KKP). This problem has been proven to be NP-complete [Kellerer et al., 2004].…”
Section: Theoretical Analysis of the Problem
confidence: 99%
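For reference, the exact k-item knapsack problem named in this statement has the standard formulation below; the symbols ($p_i$ profits, $w_i$ weights, $c$ capacity, $k$ cardinality) are generic notation, not taken from the citing paper:

```latex
\max_{x \in \{0,1\}^n} \; \sum_{i=1}^{n} p_i x_i
\quad \text{subject to} \quad
\sum_{i=1}^{n} w_i x_i \le c,
\qquad
\sum_{i=1}^{n} x_i = k .
```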