2017
DOI: 10.1007/978-3-319-57972-6_10

Accelerating Hash-Based Query Processing Operations on FPGAs by a Hash Table Caching Technique

Abstract: Extracting valuable information from the rapidly growing field of Big Data faces serious performance constraints, especially in software-based database management systems (DBMS). In a query processing system, hash-based computational primitives such as the hash join and the group-by are the most time-consuming operations, as they frequently need to access the hash table in high-latency off-chip memory and to traverse the whole table. Moreover, hash collision is an inherent issue…
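
To make the problem described in the abstract more concrete, the following is a minimal, software-only sketch (not the authors' FPGA design) of a hash-join probe in which a small direct-mapped cache, standing in for on-chip BRAM, is placed in front of a larger hash table that models slow off-chip memory. All names and structures here are illustrative assumptions.

// Illustrative sketch only: a direct-mapped cache (standing in for on-chip
// BRAM) in front of a build-side hash table that models slow off-chip memory.
// This is not the paper's actual FPGA architecture.
#include <cstddef>
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

struct Entry {
    uint64_t key = 0;
    uint64_t payload = 0;
    bool valid = false;
};

constexpr std::size_t kCacheSlots = 1024;            // models limited BRAM capacity

std::unordered_map<uint64_t, uint64_t> build_table;  // "off-chip" build-side hash table
std::vector<Entry> cache(kCacheSlots);               // "on-chip" direct-mapped cache

// Probe one join key: check the fast cache first, fall back to the slow
// table on a miss, and fill the cache so repeated keys hit next time.
std::optional<uint64_t> probe(uint64_t key) {
    const std::size_t slot = static_cast<std::size_t>(key % kCacheSlots);
    if (cache[slot].valid && cache[slot].key == key) {
        return cache[slot].payload;                  // cache hit: no off-chip access
    }
    const auto it = build_table.find(key);           // cache miss: slow lookup
    if (it == build_table.end()) {
        return std::nullopt;                         // no matching build tuple
    }
    cache[slot] = Entry{key, it->second, true};      // install for later reuse
    return it->second;
}

A probe stream with key locality (e.g., skewed foreign keys) would mostly take the fast path, while keys that map to the same slot evict each other; that conflict behavior is one reason the choice of caching mechanism matters in practice.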

Cited by 7 publications (7 citation statements)
References 18 publications
“…However, the latency of accessing main memory is much larger than that of accessing BRAMs. One way to deal with this problem is to use the BRAMs as a cache [110]. However, to achieve high throughput, an efficient caching mechanism is required.…”
Section: Hash Join (mentioning)
confidence: 99%
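
The latency gap this statement refers to can be illustrated with a simple back-of-the-envelope model; the cycle counts below are placeholder assumptions, not figures from the paper or the citing survey.

// Hypothetical numbers: average hash-table access latency as a function of
// the cache hit rate, assuming a 1-cycle BRAM hit and a 100-cycle DRAM miss.
#include <cstdio>

int main() {
    const double bram_cycles = 1.0;    // assumed on-chip (BRAM) access latency
    const double dram_cycles = 100.0;  // assumed off-chip (DRAM) access latency
    const double hit_rates[] = {0.0, 0.5, 0.9, 0.99};
    for (const double h : hit_rates) {
        const double avg = h * bram_cycles + (1.0 - h) * dram_cycles;
        std::printf("hit rate %.2f -> average latency %.1f cycles\n", h, avg);
    }
    return 0;
}

Even a 90% hit rate leaves the average latency dominated by misses (about 10.9 cycles under these assumptions), which is why the citing work stresses that an efficient caching mechanism, not merely the presence of BRAM, is needed for high throughput.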
“…Among these, FPGAs are rapidly becoming popular and are expected to be used in 33% of modern data centers by 2020 [28]. This increase in the popularity of FPGAs is attributed to their power-efficiency compared to GPUs, their flexibility compared to ASICs, and recent advances in High-Level Synthesis (HLS) tools that significantly facilitate easier mapping of applications on FPGAs [84,114,92,93,94,82,6]. Hence, major companies, such as Amazon [44] (with EC2 F1 cloud) and Microsoft [29] (with Brainwave project), have made large investments in FPGA-based CNN accelerators.…”
Section: Introduction (mentioning)
confidence: 99%