2010 IEEE/ACM Int'l Conference on Green Computing and Communications & Int'l Conference on Cyber, Physical and Social Computing
DOI: 10.1109/greencom-cpscom.2010.25
An Embedded Software Power Model Based on Algorithm Complexity Using Back-Propagation Neural Networks

Cited by 10 publications (9 citation statements)
References 8 publications
“…We focus on the second type, as our goal is to establish real-time power monitoring on cloud servers without any extra metering devices. Li et al. [26] built a software/program power consumption model using a BPNN. The model takes as input the target program's time complexity, space complexity, and data size, and was shown experimentally to be accurate.…”
Section: Modeling Time Series Of Power With Annmentioning
confidence: 99%
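The BPNN model described in the statement above can be sketched as a small back-propagation network regressing power from (time complexity, space complexity, data size) features. Everything below — the synthetic data, the [0, 1] feature encoding, the network size, and the learning rate — is an illustrative assumption, not the authors' actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row is (time-complexity score,
# space-complexity score, input data size), all normalized to [0, 1].
X = rng.random((200, 3))
# Synthetic "power" target, for illustration only.
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

# One hidden layer of sigmoid units, trained by back-propagation.
W1 = rng.normal(scale=0.5, size=(3, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    # Forward pass: hidden sigmoid layer, linear output for regression.
    h = sigmoid(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Backward pass: gradients of the mean squared error.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * h * (1 - h)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((pred - y) ** 2).mean())
print(f"final training MSE: {mse:.5f}")
```

In the cited model the complexity features would come from static analysis of the target program; here they are random placeholders, so only the training mechanics are meaningful.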
“…Training the model to convergence takes more time and computing power as the number of parameters to be optimized rises. Server energy consumption has been forecast via Q-learning, B-ANN, MLP, and other reinforcement learning techniques (Shen et al., 2013; Li et al., 2010; Islam et al., 2012; Moreno and Xu, 2012; Caglar and Gokhale, 2014; Tesauro et al., 2017). However, before the training effect can truly improve, reinforcement learning requires a significant accumulation of experience.…”
Section: Figurementioning
confidence: 99%
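The experience-accumulation point in the quote above can be illustrated with a toy tabular Q-learning loop. The two-level load model, reward numbers, and hyperparameters are invented for illustration and are not from any of the cited systems:

```python
import random

random.seed(1)

# Toy setting: pick a server power state to minimize energy cost.
# States: load level 0 (low) or 1 (high); actions: run at low (0) or
# high (1) frequency. Reward is negative energy cost, with a penalty
# for under-provisioning high load.
def reward(state, action):
    energy = 1.0 if action == 0 else 2.0
    penalty = 3.0 if (state == 1 and action == 0) else 0.0
    return -(energy + penalty)

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma, eps = 0.1, 0.9, 0.2

state = 0
for step in range(5000):  # many interactions: the "experience accumulation"
    # Epsilon-greedy action selection.
    action = (random.choice((0, 1)) if random.random() < eps
              else max((0, 1), key=lambda a: Q[(state, a)]))
    r = reward(state, action)
    next_state = random.choice((0, 1))  # load arrives at random
    best_next = max(Q[(next_state, 0)], Q[(next_state, 1)])
    # Standard Q-learning temporal-difference update.
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = next_state

# Greedy policy after training: low frequency for low load, high for high.
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in (0, 1)}
print(policy)
```

Even this two-state problem needs thousands of interactions before the greedy policy stabilizes, which is the training-cost concern the citing authors raise.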
“…It follows, W = G(L) × X (8), where G(L) is the equivalent data size mapped from the ATC complexity and L is the actual input data size of the application. G(L) takes the following expressions according to the complexity of the application: 1, log L, L, L × log L, L², … [24][25] (e.g., for the voice recognition algorithm, G(L) = L²). X is a Gamma-distributed random variable [23], and its probability density function (PDF) is given by,…”
Section: Optimal Computation Energy In Me Modelmentioning
confidence: 99%
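The workload model in the statement above — an equivalent data size G(L) scaled by a Gamma-distributed factor X — can be sketched numerically. The complexity table entries and the Gamma shape/scale parameters (k, θ) below are illustrative assumptions, not values from the cited model:

```python
import math
import random

# Complexity-to-equivalent-data-size mapping G(L); entries follow the
# list in the quoted text (1, log L, L, L log L, L^2).
G = {
    "O(1)":       lambda L: 1.0,
    "O(log L)":   lambda L: math.log2(L),
    "O(L)":       lambda L: float(L),
    "O(L log L)": lambda L: L * math.log2(L),
    "O(L^2)":     lambda L: float(L) ** 2,  # e.g. the voice-recognition case
}

def expected_workload(complexity, L, k=2.0, theta=1.5, n=100_000, seed=0):
    """Monte-Carlo estimate of E[G(L) * X] with X ~ Gamma(k, theta)."""
    rng = random.Random(seed)
    gl = G[complexity](L)
    return gl * sum(rng.gammavariate(k, theta) for _ in range(n)) / n

# Since E[X] = k * theta = 3.0, the estimate for L = 64 under O(L^2)
# should land near 64**2 * 3 = 12288.
w = expected_workload("O(L^2)", L=64)
print(round(w))
```

The separation of a deterministic complexity term G(L) from a random factor X is what lets the model average over run-to-run variation analytically via the Gamma PDF.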