Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data 2015
DOI: 10.1145/2723372.2747645
Rethinking SIMD Vectorization for In-Memory Databases

Cited by 135 publications (101 citation statements) · References 37 publications
“…As opposed to multi-threading, which enables thread-level parallelism, vectorized instructions enable data-level parallelism, where the degree of parallelism depends on the width of the specialized registers. When working on a data type for which k values fit into these registers, SIMD offers a theoretical speed-up of k; however, this value is rarely achieved in practice, as multiple other factors, such as memory bandwidth and the specific instruction being performed, play an important role [32]. For instance, AVX instructions, which work on 256-bit SIMD registers, can process eight 32-bit floating-point values in parallel with one instruction and thus offer a theoretical speed-up of a factor of 8.…”
Section: Vectorized Instructions (mentioning)
confidence: 99%
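The k-way speed-up described in that statement is easiest to see in intrinsic form. The sketch below is illustrative only and not taken from the cited paper: an element-wise addition over float arrays, first scalar and then with 256-bit AVX intrinsics, where a single _mm256_add_ps performs eight 32-bit additions. The assumption that the array length is a multiple of 8 exists only to keep the example short.

/* Illustrative sketch: scalar vs. 8-wide AVX addition of float arrays.
 * Assumes n is a multiple of 8; compile with -mavx. */
#include <immintrin.h>
#include <stddef.h>

void add_scalar(const float *a, const float *b, float *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];                 /* one element per iteration */
}

void add_avx(const float *a, const float *b, float *out, size_t n) {
    for (size_t i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);   /* load 8 floats */
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vc = _mm256_add_ps(va, vb);    /* 8 additions in one instruction */
        _mm256_storeu_ps(out + i, vc);        /* store 8 results */
    }
}

Even with the theoretical factor of 8, the memory-bound nature of such a loop typically limits the observed speed-up, which is exactly the caveat the citing text raises.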
“…A more thorough operator redesign was shown to be required to fully take advantage of vectorized instructions [6]. The authors used selective load and store and scatter/gather operations available in modern SIMD instruction sets as building blocks for new scan and join operators.…”
Section: Related Work (mentioning)
confidence: 99%
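As a hedged illustration of how such building blocks compose into an operator, the sketch below implements a simplified vectorized selection scan with AVX2: eight keys are compared per instruction and qualifying row ids are then emitted from the resulting bitmask. The cited work relies on dedicated selective-store and gather primitives for that emission step; here it falls back to a small scalar loop over the mask, and the function name scan_gt and the multiple-of-8 length assumption are illustrative only.

/* Simplified vectorized selection scan: keys[i] > threshold.
 * Assumes n is a multiple of 8; compile with -mavx2 (GCC/Clang,
 * __builtin_ctz is a compiler builtin). */
#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

size_t scan_gt(const int32_t *keys, size_t n, int32_t threshold,
               uint32_t *out_rowids) {
    __m256i vthresh = _mm256_set1_epi32(threshold);
    size_t out = 0;
    for (size_t i = 0; i < n; i += 8) {
        __m256i v  = _mm256_loadu_si256((const __m256i *)(keys + i));
        __m256i gt = _mm256_cmpgt_epi32(v, vthresh);          /* 8 compares at once */
        int mask = _mm256_movemask_ps(_mm256_castsi256_ps(gt)); /* 1 bit per lane */
        while (mask) {                                        /* emit matching row ids */
            int lane = __builtin_ctz(mask);
            out_rowids[out++] = (uint32_t)(i + lane);
            mask &= mask - 1;                                 /* clear lowest set bit */
        }
    }
    return out;                                               /* number of qualifying rows */
}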
“…Zhou [16], range indexes [17], Bloom filters [18], and hash tables and partitioning used in radixsort and hash joins [19].…”
Section: Related Work (mentioning)
confidence: 99%