1998
DOI: 10.1145/301589.286865
Using generational garbage collection to implement cache-conscious data placement

Abstract: The cost of accessing main memory is increasing. Machine designers have tried to mitigate the consequences of the processor and memory technology trends underlying this increasing gap with a variety of techniques to reduce or tolerate memory latency. These techniques, unfortunately, are only occasionally successful for pointer-manipulating programs. Recent research has demonstrated the value of a complementary approach, in which pointer-based data structures are reorganized to improve cache locality. This pape…
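The reorganization the abstract describes can be pictured with a small sketch (illustrative only, not the paper's implementation): a copying pass relocates a pointer-based structure into consecutive slots in traversal order, the placement idea a generational copying collector can apply to hot data. Names, simulated addresses, and node layout here are invented for the example.

```python
# Illustrative sketch: a copying-collector-style pass that assigns
# consecutive (simulated) addresses to linked-list nodes in traversal
# order, so nodes visited together end up near each other in memory.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None
        self.addr = None  # simulated memory address

def build_scattered_list(values, addresses):
    """Build a linked list whose nodes sit at scattered addresses."""
    head = prev = None
    for v, a in zip(values, addresses):
        n = Node(v)
        n.addr = a
        if prev is None:
            head = n
        else:
            prev.next = n
        prev = n
    return head

def copy_in_traversal_order(head, base=0, node_size=1):
    """Assign consecutive addresses in the order nodes are reached,
    mimicking how a copying GC can cluster a hot structure."""
    addr, n = base, head
    while n is not None:
        n.addr = addr
        addr += node_size
        n = n.next

head = build_scattered_list([1, 2, 3, 4], [400, 17, 250, 3])
copy_in_traversal_order(head)

addrs = []
n = head
while n:
    addrs.append(n.addr)
    n = n.next
print(addrs)  # → [0, 1, 2, 3]: contiguous after the copying pass
```

After the pass, a traversal touches consecutive addresses instead of four scattered ones, which is the cache-locality benefit the paper exploits.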

Cited by 53 publications (62 citation statements)
References 30 publications
“…implemented cache-specific techniques such as compression and line coloring for LDS nodes in a memory allocator [3] and a generational garbage collector [4]. One of the earliest software-controlled LDS prefetching schemes was SPAID of Lipasti et al.…”
Section: Related Work
confidence: 99%
“…For the modulo relation we have: {(0,1)}<8> ∪ {(0,1)}<18> ⊆ {(0,1)}<2>, which implies that the program cannot be parallelized (as iterations 0 and 1 modulo 2 are executed on the same thread). However, with the step relation we get: {(0,1)}|8> ∪ {(0,1)}|18> ⊆ S|18>, where S = {(0,1), (8,9), (16,17), (6,7), (14,15), (4,5), (12,13), (2,3), (10,11)}. In this case we can run 9 concurrent threads while satisfying the observed dependencies: …”
Section: Set-congruence Algebra (in Z × Z)
confidence: 99%
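Under the reading that S in the quoted statement lists each thread's pair of iterations, its claimed properties can be checked mechanically. This is a quick sketch of that check, not the cited paper's set-congruence algebra, and the interpretation of S is an assumption:

```python
# S as quoted above: nine iteration pairs produced by the step relation.
S = [(0, 1), (8, 9), (16, 17), (6, 7), (14, 15),
     (4, 5), (12, 13), (2, 3), (10, 11)]

# Assumed reading: each pair is one thread's iterations. Every pair is
# two consecutive iterations, so a dependence between i and i+1 stays
# within a single pair (and hence a single thread).
consecutive = all(b == a + 1 for a, b in S)

# The pairs are disjoint and cover iterations 0..17, so the 18
# iterations split into 9 groups -- one per concurrent thread.
covered = sorted(i for pair in S for i in pair)

print(len(S), consecutive, covered == list(range(18)))  # → 9 True True
```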
“…Since dynamic analysis looks directly at the accessed address, our model is simpler (non-relational flavor) while covering the exploitable cases. Second, our dynamic analysis may be applied to richer containers (linked lists, trees) than (only) arrays as long as the memory has a regular structure; Chilimbi and Larus's work [3,4] improves cache behavior by re-organizing memory to a similar regular structure that also facilitates our dynamic analysis.…”
Section: Introduction
confidence: 99%
“…Operating systems may employ the use of variable memory page sizes [27,30] to adapt to the memory access patterns of the application. Another example is memory management in Java Virtual Machines (JVM), where the garbage collector's activities are adapted to the application's behavior [7,23]. Just-In-Time compilers, such as the ones found in JVMs [1,2], and dynamic optimization frameworks [4,19] target adaptation of the executing code itself in order to exploit dynamic program characteristics.…”
Section: Introduction
confidence: 99%