Proceedings of the International Conference on Supercomputing 2017
DOI: 10.1145/3079079.3079080

Efficient SIMD and MIMD parallelization of hash-based aggregation by conflict mitigation



Cited by 17 publications (6 citation statements)
References 22 publications

Citation statements:
“…For example, seismic [63], stencil [64], [65], electromagnetic [66], molecular dynamics [67], Fast Multipole Methods [68], tensors [39], deep learning [69], [70], databases [49], [71], [72], big data [73], systems and graph engines [74], and many more.…”
Section: State-of-the-art Shared-memory Optimizations (mentioning)
Confidence: 99%
“…Similarly, MapReduce framework is optimized for KNC architecture in [73], in which the thread-level parallelism is leveraged. Vectorization through explicit SIMDization is explored in many database primitive operations including hashing [71], [72], as well as sequential scans, aggregation, index operations, and joins [49].…”
Section: State-of-the-art Shared-memory Optimizations (mentioning)
Confidence: 99%
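
To make the SIMDization point above concrete, here is a minimal sketch of the core update step of vectorized hash-based aggregation: a conflict-aware scatter-add that detects lanes whose bucket index collides with an earlier lane and serializes only those lanes, using the AVX-512CD conflict-detection intrinsic. This is an illustrative assumption, not the cited paper's specific conflict-mitigation scheme; the function name conflict_aware_scatter_add and the flat int32 table layout are hypothetical.

// Requires AVX-512F and AVX-512CD (e.g., compile with -mavx512f -mavx512cd).
#include <immintrin.h>
#include <cstddef>
#include <cstdint>

// table[idx[i]] += val[i] for i in [0, n); n is assumed to be a multiple of 16
// and every idx[i] a valid bucket position in the (hypothetical) flat table.
void conflict_aware_scatter_add(std::int32_t* table,
                                const std::int32_t* idx,
                                const std::int32_t* val,
                                std::size_t n) {
    for (std::size_t i = 0; i < n; i += 16) {
        __m512i vidx = _mm512_loadu_si512(idx + i);
        __m512i vval = _mm512_loadu_si512(val + i);
        // For each lane k: a bitmask of earlier lanes j < k with idx[j] == idx[k].
        __m512i conf = _mm512_conflict_epi32(vidx);
        __mmask16 todo = 0xFFFF;
        while (todo) {
            // Broadcast the set of still-pending lanes into every 32-bit element.
            __m512i pending = _mm512_broadcastmw_epi32(todo);
            // A lane is ready if none of its conflicting earlier lanes is still pending.
            __mmask16 ready = _mm512_mask_testn_epi32_mask(todo, conf, pending);
            // Gather, add, scatter only the ready lanes; duplicate keys within the
            // vector are applied in later passes instead of being lost.
            __m512i cur = _mm512_mask_i32gather_epi32(_mm512_setzero_si512(),
                                                      ready, vidx, table, 4);
            __m512i upd = _mm512_add_epi32(cur, vval);
            _mm512_mask_i32scatter_epi32(table, ready, vidx, upd, 4);
            todo = (__mmask16)(todo & ~ready);
        }
    }
}

Lane 0 never conflicts with an earlier lane, so each pass of the inner loop retires at least one lane and the loop finishes after at most 16 passes, even when all 16 keys are identical.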
“…Data processing and database algorithms fit particularly well into this pattern. Their behavior is often highly predictable; and there are many examples that show how the awareness of resources and their uses in database code can significantly improve performance [1,14,15,18,26,31,34].…”
Section: Task Model: MxTasks (mentioning)
Confidence: 99%
“…The first step is an optional local aggregation where data is aggregated locally, followed by a second step where data is repartitioned and transferred to the final destination node for aggregation [45,14]. The local aggregation can reduce the amount of data transferred in the second step for algebraic aggregations, as tuples with the same GROUP BY key are aggregated to a single tuple during local aggregation [6,52,22,35,48]. Local aggregation works effectively for low-cardinality domains, such as age, sex or country, where data can be reduced substantially and make the cost of the repartition step negligible.…”
Section: Introduction (mentioning)
Confidence: 99%
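
As a concrete illustration of the two-step pattern described in this excerpt, the following is a minimal single-process sketch (an assumption for illustration, not code from any cited system): each node first collapses its tuples to one partial aggregate per GROUP BY key, and the pre-aggregated partials are then hash-partitioned to their destination node and merged there. The function names, the SUM aggregate, and the std::hash-based partitioning are hypothetical choices.

#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <utility>
#include <vector>

using Key = std::int64_t;  // GROUP BY key (e.g., a country id)
using Agg = std::int64_t;  // an algebraic aggregate, here SUM

// Step 1: local aggregation collapses duplicate keys to a single tuple per key,
// so low-cardinality domains shrink dramatically before any data is shipped.
std::unordered_map<Key, Agg>
local_aggregate(const std::vector<std::pair<Key, Agg>>& tuples) {
    std::unordered_map<Key, Agg> local;
    for (const auto& [k, v] : tuples) local[k] += v;
    return local;
}

// Step 2: repartition the pre-aggregated tuples by key and merge them at the
// destination node (the inner loop stands in for the network transfer).
std::vector<std::unordered_map<Key, Agg>>
repartition_and_merge(const std::vector<std::unordered_map<Key, Agg>>& per_node_locals,
                      std::size_t num_nodes) {
    std::vector<std::unordered_map<Key, Agg>> merged(num_nodes);
    for (const auto& local : per_node_locals)
        for (const auto& [k, v] : local)
            merged[std::hash<Key>{}(k) % num_nodes][k] += v;
    return merged;
}

With millions of input tuples but only a few hundred distinct keys per node, step 1 caps the repartitioned volume at (distinct keys × nodes) tuples, which is why the excerpt calls the repartition cost negligible for low-cardinality domains.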