2016
DOI: 10.1109/tpds.2015.2442983
Predicting Cross-Core Performance Interference on Multicore Processors with Regression Analysis

Cited by 23 publications (17 citation statements)
References 45 publications
“…Predictive Modeling. Recent studies have shown that machine learning based predictive modeling is effective in code optimization [43], [44], performance prediction [45], [46], parallelism mapping [20], [47], [48], [49], [50], and task scheduling [51], [52], [53], [54], [55], [56]. Its great advantage is its ability to adapt to ever-changing platforms, as it makes no prior assumptions about their behavior.…”
Section: Domain-Specific Optimizations
confidence: 99%
“…For instance, given the performance of a task on an idle node, the performance on a loaded node can be predicted using time series forecasting methods [66]. Another approach to predict performance degradation due to resource sharing is based on the current load on shared CPU resources, as revealed by hardware performance counters [67,68].…”
Section: Principal Performance Factors
confidence: 99%
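The statement above describes predicting performance degradation from the load that co-running tasks place on shared CPU resources, as observed through hardware performance counters. A minimal sketch of that idea, assuming a single hypothetical counter-derived feature (co-runner last-level-cache misses per kilo-instruction) and made-up illustrative training data, is an ordinary least-squares fit of slowdown against contention pressure:

```python
# Hedged sketch: regress observed slowdown of a target task on a
# co-runner's cache-miss pressure. The feature choice and all numbers
# are hypothetical illustrations, not data from the cited works.

def fit_linear(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical training data: co-runner LLC misses per kilo-instruction
# versus measured slowdown factor of the target task.
misses = [1.0, 2.0, 4.0, 8.0]
slowdown = [1.05, 1.12, 1.24, 1.50]

a, b = fit_linear(misses, slowdown)
# Predict the slowdown under a new co-runner's counter reading.
predicted = a * 6.0 + b
```

Real systems in this line of work use many counters and richer models; the sketch only shows the shape of the counter-to-degradation mapping the quoted statement refers to.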
“…Zhao et al [68] predict the execution duration of a task in relationship to the number of threads by applying a thread resource contention model (Section 4.4). Each thread is treated like an independent single-threaded instance of the task, contending for CPU resources.…”
Section: Scale Modeling at the Task Level
confidence: 99%
“…For multi-threaded CPU tasks, we assume that performance scales perfectly with the number of threads. The assumption is reasonable since MapReduce applications are data parallel and do not include synchronization inside a task; the performance interference across multiple threads caused by shared-cache and bandwidth contention can be predicted using the approach in [39], and is ignored in this paper. So the data processing time under a given configuration k can be computed using Equation 5, and the data reading time will be discussed in Section 4.4.…”
Section: Modeling Data Processing Speed
confidence: 99%
“…There has been a lot of work addressing contentions on shared cache [10,23,38], memory bandwidth [15,37,36], memory subsystem [26,27,40,17,33,34,39], and I/O resource [6,35,24]. Meanwhile, GPUPerf [31] has been developed to predict the performance and understand bottlenecks of GPGPU applications.…”
Section: Related Work
confidence: 99%