2015 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata.2015.7363748
Energy-efficient acceleration of big data analytics applications using FPGAs

Cited by 48 publications (14 citation statements)
References 17 publications
“…4) Big Data Analysis acceleration: In [34], a platform for energy-efficient acceleration of big data analytics applications using FPGAs is presented by GMU and UCLA. The proposed scheme analyzes data mining and machine learning algorithms, used extensively in big data applications, on a heterogeneous platform that includes both CPUs and FPGAs.…”
Section: B. MapReduce — 1) Scalable MapReduce Accelerator (mentioning)
confidence: 99%
“…Specifically, AXI4 allows a burst of up to 256 data transfer cycles with just a single address phase and is therefore used for memory-mapped interfaces. In addition, AXI4-Stream allows bursts of unlimited size and removes the memory address phase altogether; however, because they lack address phases, AXI4-Stream interfaces and transfers are not considered memory-mapped [18]. Another approach is to build systems that combine AXI memory-mapped IP with AXI4-Stream.…”
Section: FPGA Bitstream (mentioning)
confidence: 99%
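To make the memory-mapped versus streaming distinction concrete, here is a minimal software-side sketch of such a combined design, assuming a PYNQ overlay that contains an AXI DMA engine (memory-mapped AXI4 toward DDR, AXI4-Stream toward the accelerator). The bitstream name and the axi_dma_0 instance name are illustrative, not from the cited papers:

```python
from pynq import Overlay, allocate

# Hypothetical overlay containing an AXI DMA: memory-mapped (AXI4) on the
# DDR side, AXI4-Stream toward the accelerator.
overlay = Overlay("stream_accel.bit")
dma = overlay.axi_dma_0  # IP name depends on the block design

# Physically contiguous buffers the DMA can burst-read/write over AXI4.
in_buf = allocate(shape=(1024,), dtype="uint32")
out_buf = allocate(shape=(1024,), dtype="uint32")
in_buf[:] = range(1024)

# Memory-mapped AXI4 bursts feed the send channel; the data then crosses
# to the accelerator as an address-less AXI4-Stream and returns the same way.
dma.sendchannel.transfer(in_buf)
dma.recvchannel.transfer(out_buf)
dma.sendchannel.wait()
dma.recvchannel.wait()

in_buf.freebuffer()
out_buf.freebuffer()
```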
“…In this paper we present a novel approach focusing on both the performance and the power consumption of the recently released Zynq device, for the alternating least squares (ALS) learning algorithm, an extension of the least squares algorithm, the latter of which sees somewhat wider use in modern computing. Our prototype cluster is almost identical to the ones presented in [14] and [15], except that we used Pynq boards instead of ZedBoards, and Apache Spark instead of Hadoop. Furthermore, in our case the bitstream can be downloaded at runtime, as long as it is stored on the board's SD card, using a module that ships with Pynq's image.…”
Section: Related Work (mentioning)
confidence: 99%
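As a minimal illustration of that runtime download, here is a sketch using PYNQ's Overlay class; the bitstream path on the SD card is a hypothetical example, not one taken from the cited work:

```python
from pynq import Overlay

# Hypothetical location of the ALS bitstream on the board's SD card.
BITSTREAM = "/home/xilinx/overlays/als/als.bit"

# Constructing the Overlay programs the Zynq PL by default; passing
# download=False defers programming until download() is called explicitly.
ov = Overlay(BITSTREAM, download=False)
ov.download()
assert ov.is_loaded()  # PL is now configured with the accelerator
```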