Published: 2000
DOI: 10.1007/s007780000031

Optimizing database architecture for the new bottleneck: memory access

Abstract: In the past decade, advances in the speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture, in terms of both data structures and algorithms. We discuss how vertically fragmented data structures…
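The abstract mentions a simple scan test used to expose the memory-access bottleneck. The paper's exact methodology is not reproduced here; the following is only a minimal sketch of a strided-scan experiment in that spirit, where the buffer size, stride values, and timing approach are illustrative assumptions.

```cpp
// Minimal sketch of a strided scan experiment: touch one byte every `stride`
// bytes over a large buffer and report the cost per access. Parameters are
// illustrative, not the paper's exact setup.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t bytes = 256u * 1024 * 1024;        // 256 MB working set
    std::vector<std::uint8_t> buf(bytes, 1);

    for (std::size_t stride : {1, 16, 64, 256, 1024}) {  // bytes between touched locations
        volatile std::uint8_t sink = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < bytes; i += stride)
            sink += buf[i];                              // one access per stride step
        auto t1 = std::chrono::steady_clock::now();
        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        std::printf("stride %5zu B: %.2f ns per access\n",
                    stride, ns / static_cast<double>(bytes / stride));
        (void)sink;
    }
}
```

As the stride grows past a cache line, each access tends to pay the full cache-miss or memory latency, which is the effect the scan test is meant to make visible.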

Cited by 118 publications (65 citation statements)
References 11 publications
“…[9,17,19,26,27]. It maps the value domain of one or more columns to a contiguous integer range [7,9,10,12,18,20]. This mapping replaces column values with unique integer codes and is stored in a separate data structure, the dictionary, which supports two access methods:…”
Section: Dictionaries in Column Stores (mentioning)
Confidence: 99%
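The quoted passage describes dictionary encoding: distinct column values are mapped to a contiguous integer code range, and the dictionary supports two access methods (value-to-code and code-to-value). A minimal sketch of such a structure follows; the class and method names (Dictionary, encode, decode) are illustrative assumptions, not from the cited papers.

```cpp
// Minimal sketch of a column dictionary: distinct values get contiguous
// integer codes, with the two access methods described in the quote.
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

class Dictionary {
public:
    // Access method 1: value -> code (assigns a new code to an unseen value).
    std::uint32_t encode(const std::string& value) {
        auto it = code_of_.find(value);
        if (it != code_of_.end()) return it->second;
        std::uint32_t code = static_cast<std::uint32_t>(values_.size());
        values_.push_back(value);
        code_of_.emplace(value, code);
        return code;
    }

    // Access method 2: code -> value (extraction).
    const std::string& decode(std::uint32_t code) const { return values_[code]; }

private:
    std::vector<std::string> values_;                        // code -> value
    std::unordered_map<std::string, std::uint32_t> code_of_; // value -> code
};
```

The column itself would then be stored as a vector of integer codes, with the dictionary kept as the separate data structure the quote refers to.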
“…For the size range 1MB-2GB, we observe a significant runtime increase when the dictionary outgrows the last level cache (25MB). The increase is caused by main memory accesses (details in Section 2), a known problem for index joins [32] and main memory database systems in general [5,20].…”
Section: Introduction (mentioning)
Confidence: 99%
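The runtime increase described in this quote is the classic effect of a working set outgrowing the last-level cache. A minimal sketch of how one might observe it, with illustrative dictionary sizes spanning roughly the quoted 1MB-2GB range and a simple random probe loop, could look like this:

```cpp
// Minimal sketch: probe dictionaries of growing size with random codes and
// watch the per-lookup cost jump once the dictionary exceeds the last-level
// cache. Sizes and probe counts are illustrative assumptions.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const std::size_t probes = 1u << 24;                     // 16M random lookups
    for (std::size_t entries : {1u << 16, 1u << 20, 1u << 24, 1u << 28}) {
        std::vector<std::uint64_t> dict(entries);
        for (std::size_t i = 0; i < entries; ++i) dict[i] = i;

        std::mt19937_64 rng(42);
        volatile std::uint64_t sink = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t p = 0; p < probes; ++p)
            sink += dict[rng() % entries];                   // random, cache-unfriendly access
        auto t1 = std::chrono::steady_clock::now();
        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        std::printf("%10zu entries (%7.1f MB): %.2f ns per lookup\n",
                    entries, entries * sizeof(std::uint64_t) / 1e6, ns / probes);
        (void)sink;
    }
}
```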
“…Hence, different kinds of processing devices use an encoding optimized for the respective device. For example, a CPU encoding has to support effective caching to reduce the memory access cost [41], whereas a GPU encoding has to ensure coalesced memory access of threads to achieve maximal performance [45]. This usually requires transcoding data before or after the data transfer, which is an additional overhead that can break performance.…”
Section: Functional Properties (mentioning)
Confidence: 99%
“…MonetDB pioneered this trend [17], introducing the radix partitioned join algorithm, and pointing out the potential of optimizing for cache and memory. This style of partitioning has been used in many subsequent papers, including [11] and [5].”
Section: Related Work (mentioning)
Confidence: 99%
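The radix partitioning credited to MonetDB in this quote scatters tuples into clusters on a few low-order bits of the (hashed) key, so that each cluster can later be joined while staying cache-resident. A minimal sketch of a single partitioning pass follows; the tuple layout, function name, and number of bits are illustrative assumptions.

```cpp
// Minimal sketch of one radix-partitioning pass: tuples are scattered into
// 2^bits clusters on the low bits of the key, sized via a histogram pass.
#include <cstdint>
#include <vector>

struct Tuple {
    std::uint64_t key;
    std::uint64_t payload;
};

std::vector<std::vector<Tuple>> radix_partition(const std::vector<Tuple>& input,
                                                unsigned bits) {
    const std::size_t fanout = std::size_t{1} << bits;
    const std::uint64_t mask = fanout - 1;

    // Histogram pass: count tuples per cluster to size the output buffers.
    std::vector<std::size_t> counts(fanout, 0);
    for (const Tuple& t : input) ++counts[t.key & mask];

    std::vector<std::vector<Tuple>> clusters(fanout);
    for (std::size_t c = 0; c < fanout; ++c) clusters[c].reserve(counts[c]);

    // Scatter pass: each tuple goes to the cluster selected by its low key bits.
    for (const Tuple& t : input) clusters[t.key & mask].push_back(t);
    return clusters;
}
```

In a full radix join, both input relations are partitioned on the same bits (possibly over multiple passes to keep the fanout per pass small), and matching clusters are then joined independently while they fit in cache.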