2015 International Conference on Field Programmable Technology (FPT)
DOI: 10.1109/fpt.2015.7393148

A co-design approach for accelerated SQL query processing via FPGA-based data filtering

Cited by 19 publications (12 citation statements)
References 9 publications
“…Offloading operations for query processing to FPGA-based hardware accelerators has been well researched in [2,3,20,25,28,31,35], because of its small energy footprint and fast execution. Next to traditional co-processor systems, the "bump-in-the-wire" approach (as applied in ReProVide) has also been of great interest in the approaches described in [16,28,32].…”
Section: Related Work
confidence: 99%
“…Static approaches that use FPGAs for query processing [4], [8], [18] have succeeded in implementing query engines capable of SQL-based processing (select, project, join, etc.) with very high throughput.…”
Section: Motivation
confidence: 99%
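For context on the select/project filtering these engines perform in hardware, here is a minimal software-level sketch; the row layout, column names, and predicate below are hypothetical and only illustrate the kind of per-row work that such data-filtering pipelines move onto the FPGA.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical row layout; the cited engines define their own formats.
struct Row { uint32_t orderKey; uint32_t quantity; uint64_t price; };

// Software model of: SELECT orderKey, price FROM lineitem WHERE quantity > threshold
// On an FPGA this loop becomes a streaming pipeline that evaluates the
// predicate on every incoming row and forwards only the projected columns.
std::vector<std::pair<uint32_t, uint64_t>>
filterAndProject(const std::vector<Row>& rows, uint32_t threshold) {
    std::vector<std::pair<uint32_t, uint64_t>> out;
    for (const Row& r : rows) {
        if (r.quantity > threshold) {                 // selection
            out.emplace_back(r.orderKey, r.price);    // projection
        }
    }
    return out;
}
```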
“…HDFS and Spark rely on GPPs and consequently inherit the GPP limitations mentioned previously. Regarding hardware acceleration, [4], [8], [14], [15], [18] have shown that Field Programmable Gate Arrays (FPGAs) can be a better alternative to GPPs for database applications in general and for query execution in particular, offering higher data throughput at lower energy cost. But not all queries can be easily implemented and processed in an FPGA.…”
Section: Introduction
confidence: 99%
“…However, in the best case, when the read of the hash-table entry hits in the cache (in BRAM), only one cycle is needed to access it. Notably, a hardware-software co-designed Bloom-filter-based hash-join accelerator is proposed in [23], which can be a complementary solution to the hash-table caching technique introduced in this paper. We do not use several hash functions, and we do not run into false-positive problems as with the Bloom filter approach.…”
Section: Hash-Based Units: Table Join and GroupBy
confidence: 99%
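The Bloom-filter-based hash-join co-design cited as [23] above is the paper tracked on this page. As a rough, software-level sketch of the idea (the names, hash functions, and sizes below are assumptions, not the paper's actual design): a compact bit vector, which would reside in BRAM on the accelerator, answers "possibly in the build table?" before a probe row is handed to the full hash table, so most non-matching rows are rejected in the filtering stage.

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <utility>
#include <vector>

// Bit vector sized in 64-bit words; on the accelerator it would live in BRAM.
struct BloomFilter {
    std::vector<uint64_t> words;
    explicit BloomFilter(std::size_t nbits) : words((nbits + 63) / 64) {}

    // Two index functions derived from one 64-bit hash (illustrative choice).
    void insert(uint64_t key) {
        uint64_t h = std::hash<uint64_t>{}(key);
        set(h % (words.size() * 64));
        set((h >> 32) % (words.size() * 64));
    }
    bool maybeContains(uint64_t key) const {
        uint64_t h = std::hash<uint64_t>{}(key);
        return get(h % (words.size() * 64)) && get((h >> 32) % (words.size() * 64));
    }

private:
    void set(std::size_t i)       { words[i / 64] |= (1ULL << (i % 64)); }
    bool get(std::size_t i) const { return words[i / 64] & (1ULL << (i % 64)); }
};

// Probe phase: rows rejected by the filter never reach the full hash table.
// False positives are resolved by the exact lookup; false negatives cannot occur.
std::vector<std::pair<uint64_t, int>>
probe(const std::vector<uint64_t>& probeKeys,
      const BloomFilter& bf,
      const std::unordered_map<uint64_t, int>& buildTable) {
    std::vector<std::pair<uint64_t, int>> matches;
    for (uint64_t k : probeKeys) {
        if (!bf.maybeContains(k)) continue;        // cheap reject in the filter stage
        auto it = buildTable.find(k);              // exact check
        if (it != buildTable.end()) matches.emplace_back(k, it->second);
    }
    return matches;
}
```

This also makes the trade-off in the surrounding statements concrete: the bit vector is cheap enough to keep entirely in BRAM, whereas caching whole hash-table entries in BRAM avoids false positives but competes for the same limited on-chip memory.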
“…In contrast, we utilize almost all of the free BRAMs of the FPGA as a cache to achieve improved throughput. In [23], because the Bloom filters use BRAMs to store the bit vector, the processing of large data tables may be limited by the BRAM capacity.…”
Section: Hash-Based Units: Table Join and GroupBy
confidence: 99%