2009 IEEE International Symposium on Parallel & Distributed Processing
DOI: 10.1109/ipdps.2009.5160942
Triple-C: Resource-usage prediction for semi-automatic parallelization of groups of dynamic image-processing tasks

Abstract: With the emergence of dynamic video processing, such as in image analysis, runtime estimation of resource usage would be highly attractive for automatic parallelization and QoS control with shared resources. A possible solution is to characterize the application execution using model descriptions of the resource usage. In this paper, we introduce Triple-C, a prediction model for Computation, Cache-memory and Communication-bandwidth usage with scenario-based Markov chains. As a typical application, we explore a…
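The scenario-based Markov-chain idea from the abstract can be illustrated with a minimal sketch. All scenario names, transition probabilities, and resource figures below are hypothetical, not taken from the paper: execution is modeled as transitions between scenarios, each with a characteristic resource-usage vector, and the expected usage for the next frame is the current scenario's transition row applied to those vectors.

```python
import numpy as np

# Hypothetical scenarios for a dynamic image-processing task.
scenarios = ["idle", "simple_frame", "complex_frame"]

# Hypothetical transition matrix: P[i][j] is the probability of
# moving from scenario i to scenario j between consecutive frames.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

# Hypothetical per-scenario resource usage:
# (computation in Mcycles, cache footprint in KB, bandwidth in MB/s).
usage = np.array([
    [10.0,  64.0,  5.0],
    [40.0, 256.0, 20.0],
    [90.0, 512.0, 45.0],
])

def predict_next_usage(current: int) -> np.ndarray:
    """Expected resource usage for the next frame, given the current
    scenario: the transition row weights each scenario's usage vector."""
    return P[current] @ usage

print(predict_next_usage(scenarios.index("simple_frame")))
# → [ 44.  268.8  22. ]
```

The same structure extends naturally to separate chains (or usage columns) for computation, cache, and bandwidth, which is the triple the paper's name refers to.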

Cited by 8 publications (7 citation statements)
References 11 publications
“…More critically, most approaches do require an estimate of the execution time for each job and application to make efficient and accurate allocation and scheduling decisions (Kim et al, 2009; Garg et al, 2010). Typically, these estimates are obtained by analytical modeling of the underlying source code (which is not always available), or using empirical and historical data of each job and application type (Albers et al, 2009; Kim et al, 2009; Garg et al, 2010; Matsunaga and Fortes, 2010). However, when the job to be executed (in our case a registration) is homogeneous (i.e.…”
Section: Proposed Methods
confidence: 99%
“…Most works in the literature related to scheduling or resource-provisioning would assume either the worst-case scenario for the number of cycles needed for an application (by source code profiling; see Li et al, 2009), or they would rely on a historical average estimate (Smith et al, 1998; Kapadia et al, 1999), or learned models of the usage of computational resources for an application (Albers et al, 2009; Matsunaga and Fortes, 2010). For the same application (e.g.…”
Section: Proposed Methods
confidence: 99%
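The historical-average approach that this excerpt contrasts with learned models can be sketched in a few lines. The job type and the runtimes below are made up for illustration: observed runtimes are kept per application type, and a new job's runtime is predicted as the mean of past runs of the same type.

```python
from collections import defaultdict
from statistics import mean

class HistoricalEstimator:
    """Predicts a job's runtime as the mean of previously observed
    runtimes for jobs of the same application type."""

    def __init__(self) -> None:
        self.history: dict[str, list[float]] = defaultdict(list)

    def record(self, app_type: str, runtime_s: float) -> None:
        self.history[app_type].append(runtime_s)

    def estimate(self, app_type: str, default_s: float = 60.0) -> float:
        runs = self.history[app_type]
        # Fall back to a default guess when no history exists yet.
        return mean(runs) if runs else default_s

est = HistoricalEstimator()
for t in (12.0, 14.0, 10.0):          # hypothetical past registration jobs
    est.record("registration", t)
print(est.estimate("registration"))   # → 12.0
```

This is exactly the kind of estimator that works well for homogeneous jobs (as the excerpt notes) but degrades when runtimes vary with input content, which motivates model-based predictors such as Triple-C.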
“…The purpose of this work is to do predictive analytics on resource allocation using machine learning methods. Using algorithms, including machine learning algorithms, to predict required resources for jobs has been pursued by several previous studies [3][4][5][6][7][8]. Using historical data is a reasonable method to improve the performance of the schedulers in order to utilize the overall HPC system efficiently [9].…”
Section: Introduction
confidence: 99%