2019
DOI: 10.1186/s40537-019-0210-7
Big data stream analysis: a systematic literature review

Abstract: Advances in information technology have facilitated the generation of large-volume, high-velocity data and the ability to store data continuously, leading to several computational challenges. Given the nature of the big data now being generated in terms of volume, velocity, variety, variability, veracity, volatility, and value [1], big data computing is a new trend for future computing. Big data computing can be broadly categorized into two types based on processing requirements: big data batch com…

Cited by 180 publications (106 citation statements)
References 102 publications
“…In our current version, "batch" applications are executed in Hadoop MapReduce, and "stream" ones in Storm. This stems from the fact that using a single framework for each computing model avoids the complexity of interoperability [Kolajo et al 2019]. Through the coordinator's allocator, an application is parsed into concrete tasks for further allocation.…”
Section: Architecture and Methodology
confidence: 99%
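The dispatch pattern described in the excerpt above — one framework per computing model, with the coordinator parsing an application into tasks — can be sketched as follows. This is a minimal illustration under assumed names; no actual Hadoop MapReduce or Storm APIs are invoked, and `Application`, `dispatch`, and `parse_into_tasks` are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Application:
    """A submitted application tagged with its computing model."""
    name: str
    model: str  # "batch" or "stream"


def dispatch(app: Application) -> str:
    """Route each application to a single framework per computing model,
    avoiding cross-framework interoperability."""
    if app.model == "batch":
        return "Hadoop MapReduce"
    if app.model == "stream":
        return "Storm"
    raise ValueError(f"unknown computing model: {app.model}")


def parse_into_tasks(app: Application, n_tasks: int) -> list:
    """Parse an application into concrete tasks for allocation."""
    return [f"{app.name}-task-{i}" for i in range(n_tasks)]
```

A batch ETL job would thus be routed to Hadoop MapReduce and split into task identifiers before allocation.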
“…Data analysis adds utility to the aggregated data. These challenges affect storage capacity, especially during data collection as raw data are generated, which streaming systems handle with virtual memory (buffers) [7]. On the other hand, generating a dataset of clean data and establishing the relationships within it requires preprocessing techniques [8].…”
Section: Revised Manuscript Received On July 22, 2019
confidence: 99%
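The preprocessing step mentioned in the excerpt above — cleaning raw records and establishing relationships among them — can be sketched minimally. All names here (`clean`, `relate`, the record schema) are illustrative assumptions, not from the cited papers.

```python
from collections import defaultdict


def clean(records):
    """Keep only records with a non-empty id and a numeric value,
    dropping malformed raw records."""
    return [r for r in records
            if r.get("id") and isinstance(r.get("value"), (int, float))]


def relate(records):
    """Group cleaned records by id to establish a simple relationship."""
    groups = defaultdict(list)
    for r in records:
        groups[r["id"]].append(r["value"])
    return dict(groups)


raw = [{"id": "a", "value": 1}, {"id": "", "value": 2},
       {"id": "a", "value": 3}, {"id": "b", "value": "x"}]
# clean(raw) drops the records with an empty id or non-numeric value,
# and relate(...) groups what remains by id.
```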
“…The authors in [7] found that scalability, privacy, and load-balancing issues, as well as empirical evaluation of big data streams and technologies, remain open to further research. They found that although significant research effort has been directed at the real-time analysis of big data streams, not much attention has been given to the preprocessing stage of big data streams.…”
Section: Fig. 1 Data Sequence To Preprocess
confidence: 99%
“…Stream processing is a continuous process that does not finish until stopped explicitly. Results of stream processing are readily accessible and are repeatedly updated as new streams of data enter the system [40]. These systems can process nearly unbounded amounts of data.…”
Section: Data Processing In Hadoop
confidence: 99%
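The behaviour described in the excerpt above — processing that runs until explicitly stopped, with results continuously updated as new elements arrive — can be sketched with a generator that yields a fresh result snapshot per element. This is an illustrative toy, not any particular framework's API.

```python
def running_stats(stream):
    """Yield a continuously updated (count, mean) after each element,
    rather than a single result once a finite batch completes."""
    count, total = 0, 0.0
    for x in stream:
        count += 1
        total += x
        yield count, total / count


# Results are readily accessible after every element; with an unbounded
# source (e.g. a socket), the loop would run until explicitly stopped.
snapshots = list(running_stats([4, 8, 6]))
# snapshots[-1] is (3, 6.0): three elements seen, running mean 6.0
```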