2009 International Conference on Computational Intelligence and Software Engineering
DOI: 10.1109/cise.2009.5363620

Performance Analysis of Prefetching Thread for Linked Data Structure in CMPs

Abstract: Chip Multiprocessor (CMP) presents new opportunities for data prefetching. A prefetching thread is a well-known approach to reducing memory latency and improving performance, and has been explored in different applications. However, for applications with linked data structures (LDS), prefetching threads tend to achieve little overall performance gain. In this paper, we analyze the performance of conventional prefetching threads using an example and five benchmarks selected from the Olden benchmark suite. The experimental r…

Cited by 3 publications (11 citation statements)
References 8 publications (5 reference statements)
“…To deal with irregular access patterns that are hard to predict, helper thread based prefetching techniques have been proposed [18][19][20][21][22][23][24][25][26][27]. Helper threaded prefetching is a technique that utilizes a second core or logical processor in a multi-threaded system to improve the performance of the main thread.…”
Section: Helper Threaded Prefetching Design
confidence: 99%
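As a concrete illustration of the quoted idea, the following is a minimal sketch of helper-threaded prefetching over a singly linked list, with the helper running on a second core. It is not the scheme evaluated in the paper: the node layout, the use of POSIX threads, and the reliance on __builtin_prefetch pulling data into a cache level shared with the main thread are all illustrative assumptions.

```c
/* Minimal sketch of helper-threaded prefetching on a singly linked list.
 * Illustrative only; compile with: gcc -O2 -pthread helper_prefetch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NODES (1 << 20)

struct node {
    long payload;
    struct node *next;
};

static struct node *head;

/* Helper thread: walks the same list as the main thread, but only touches
 * each node enough to pull it toward the cache for the main thread. */
static void *helper(void *arg)
{
    (void)arg;
    for (struct node *p = head; p != NULL; p = p->next)
        __builtin_prefetch(p->next, 0 /* read */, 1 /* low temporal locality */);
    return NULL;
}

int main(void)
{
    /* Build a list; real LDS benchmarks allocate nodes non-contiguously,
     * which is what defeats simple stride-based hardware prefetchers. */
    for (long i = 0; i < NODES; i++) {
        struct node *n = malloc(sizeof *n);
        n->payload = i;
        n->next = head;
        head = n;
    }

    pthread_t tid;
    pthread_create(&tid, NULL, helper, NULL);   /* second core runs the helper */

    /* Main thread: pointer-chasing traversal with per-node work. */
    long sum = 0;
    for (struct node *p = head; p != NULL; p = p->next)
        sum += p->payload;

    pthread_join(tid, NULL);
    printf("sum = %ld\n", sum);
    return 0;
}
```

Because nothing synchronizes the two threads in this sketch, the helper can run too far ahead (evicting data before it is used) or fall behind (prefetching too late); this timeliness issue is one commonly cited reason why conventional prefetching threads gain little on LDS programs.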
“…As indicated in [20,21], the characteristics of operations in hotspot affect the performance of the helper thread. Hence, we define Computation/Access Latency Ratio (CALR), which is the result of cycles in computation divided by cycles in memory accesses.…”
Section: Terminology
confidence: 99%
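Writing the quoted definition out as a formula (the symbols below are ours, not the citing paper's):

```latex
\mathrm{CALR} = \frac{C_{\mathrm{comp}}}{C_{\mathrm{mem}}}
```

where C_comp is the number of cycles the hotspot spends in computation and C_mem is the number of cycles it spends in memory accesses, so a low value indicates a hotspot dominated by memory access latency.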
“…To tolerate memory access latency, there have been a plethora of proposals for data prefetching [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]. Data prefetching techniques improve performance by predicting future memory accesses and fetch them in cache before they are accessed.…”
confidence: 99%
“…Helper thread based prefetching techniques [19][20][21][22][23][24][25][26][27] are promising methods to deal with irregular access patterns that are hard to predict. However, because LDS are traversed in a way that prevents individual accesses from being overlapped, conventional helper thread based prefetching techniques have some problems with LDS programs.…”
confidence: 99%
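To make the quoted serialization point concrete, here is a hypothetical micro-comparison (ours, not the paper's): in a pointer chase the address of the next node is only known once the current node's miss resolves, whereas in an array walk future addresses can be computed, and therefore prefetched, ahead of time.

```c
/* Hypothetical comparison: dependent pointer chasing vs. an array walk
 * whose future addresses are known in advance. Illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define N    (1 << 20)
#define DIST 16                       /* illustrative prefetch distance */

struct node { long v; struct node *next; };

int main(void)
{
    /* Linked list: the address of node i+1 is stored inside node i, so the
     * next load cannot even be issued until the current miss resolves. */
    struct node *head = NULL;
    for (long i = 0; i < N; i++) {
        struct node *n = malloc(sizeof *n);
        n->v = i; n->next = head; head = n;
    }
    long sum = 0;
    for (struct node *p = head; p != NULL; p = p->next)
        sum += p->v;

    /* Array: &a[i + DIST] is computable immediately, so a software prefetch
     * for a later element can overlap with the work on the current one. */
    long *a = malloc(N * sizeof *a);
    for (long i = 0; i < N; i++) a[i] = i;
    for (long i = 0; i < N; i++) {
        if (i + DIST < N)
            __builtin_prefetch(&a[i + DIST], 0, 1);
        sum += a[i];
    }

    printf("sum = %ld\n", sum);
    return 0;
}
```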