Proceedings of the 31st ACM SIGPLAN Conference on Programming Language Design and Implementation 2010
DOI: 10.1145/1806596.1806605
Cited by 24 publications (9 citation statements) · References 37 publications
“…Researchers recently have started to explore using code transformations and restructuring to improve cache sharing and reduce contention on multicores [12,13,27,28,30,36]. Most such research focuses on compilation techniques to improve cache sharing for a multi-threaded application.…”
Section: Related Work (mentioning)
confidence: 99%
“…However, the problem is far from solved in real-world deployments. A number of novel hardware solutions have been proposed [12,13,27,28,30,36] to address contention and performance fairness. However, these solutions are not readily deployable and high cost may hinder their adoption in production.…”
Section: Introduction (mentioning)
confidence: 99%
“…Researchers recently have started to explore using code transformations and restructuring to improve cache sharing and reduce contention on multicores [14,15,32,37]. Most such research focuses on compilation techniques to improve cache sharing for a multi-threaded application.…”
Section: Related Work (mentioning)
confidence: 99%
“…These prior data locality optimizations do not consider target multicore cache hierarchies explicitly. Although the work in [28] also takes into account on-chip cache hierarchies, the vertical and horizontal reuses are exploited in separate steps. Compared to [28], our proposed strategy conducts an integrated mapping and scheduling for computation blocks, which maximizes the vertical and horizontal reuses at each schedule step at the same time.…”
Section: Related Work (mentioning)
confidence: 99%
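The last excerpt contrasts exploiting vertical reuse (data staying warm in a core's private cache levels) and horizontal reuse (data shared by cores behind a common last-level cache) in separate steps versus scoring both together at each schedule step. Below is a minimal, hypothetical sketch of that integrated greedy idea under a toy cost model; the function name, weights, and set-based footprint representation are assumptions made for illustration and are not the cited papers' actual algorithms.

```python
from typing import Dict, List, Set, Tuple

def integrated_schedule(blocks: Dict[str, Set[str]],
                        cores: List[int],
                        siblings: Dict[int, List[int]],
                        w_vertical: float = 1.0,
                        w_horizontal: float = 1.0) -> List[Tuple[str, int]]:
    """Greedily map computation blocks (name -> data footprint) to cores,
    scoring vertical and horizontal reuse together at every schedule step.
    (Illustrative toy model only.)"""
    recent: Dict[int, Set[str]] = {c: set() for c in cores}   # data last touched per core
    schedule: List[Tuple[str, int]] = []
    remaining = dict(blocks)

    while remaining:
        best = None
        for name, data in remaining.items():
            for core in cores:
                # Vertical reuse: overlap with data this core touched last,
                # likely still resident in its private caches.
                vertical = len(data & recent[core])
                # Horizontal reuse: overlap with data that sibling cores have
                # recently brought into the shared last-level cache.
                shared = set().union(*(recent[s] for s in siblings[core])) \
                         if siblings[core] else set()
                horizontal = len(data & shared)
                score = w_vertical * vertical + w_horizontal * horizontal
                if best is None or score > best[0]:
                    best = (score, name, core)
        _, name, core = best
        schedule.append((name, core))
        recent[core] = remaining.pop(name)

    return schedule

if __name__ == "__main__":
    # Four tiles touching pieces of arrays A and B on a 2-core chip with a shared LLC.
    blocks = {"b0": {"A0", "B0"}, "b1": {"A0", "B1"},
              "b2": {"A1", "B0"}, "b3": {"A1", "B1"}}
    print(integrated_schedule(blocks, cores=[0, 1], siblings={0: [1], 1: [0]}))
```

A two-phase variant would first assign blocks to cores using horizontal overlap only and then order each core's blocks for vertical reuse; the excerpt's point is that evaluating both criteria at every step avoids the two phases working against each other.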