2015 IEEE First International Conference on Big Data Computing Service and Applications
DOI: 10.1109/bigdataservice.2015.56

Dynamically Scaling Apache Storm for the Analysis of Streaming Data

Abstract: Stream processing platforms allow applications to analyse incoming data continuously. Several use cases exist that make use of these capabilities, ranging from monitoring of physical infrastructures to pre-selecting video surveillance feeds for human inspection. It is difficult to predict how much computing resources are needed for these stream processing platforms, because the volume and velocity of input data may vary over time. The open source Apache Storm software provides a framework for developers to bui…

Cited by 44 publications (21 citation statements)
References 13 publications
“…Several solutions, especially those acting on data streams (e.g., load distribution, shedding), perform adaptation with finer granularity, at the level of single tuples (e.g., [3,23,50,95,168,185]) or batches of tuples (e.g., [38,177]). Solutions acting at the infrastructure level usually work with the granularity of the computing node (e.g., [47,82,176]) or the network link [5].…”
Section: What: Adaptation Actions and Controlled Entities
confidence: 99%
“…Another example is given by Kumbhare et al. [100], who use a utility function to combine resource cost and a so-called “application value” that depends on current processing accuracy. Among system-oriented metrics, the most used one is resource utilization (e.g., [20,27,57,60,68,72,91,120,134,152,165,176]), which captures the utilization level of a computing resource, usually CPU. As in different application domains, utilization is often used in conjunction with threshold-based adaptation policies, where actions are triggered whenever the utilization level violates a pre-defined threshold value (e.g., [27,57,74]).…”
Section: Metrics
confidence: 99%
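As a concrete illustration of the threshold-based adaptation policies described in the excerpt above, the minimal Java sketch below compares a measured CPU utilization against pre-defined upper and lower thresholds and suggests a scaling action. The class name, the threshold values, and the +1/0/-1 worker-count convention are hypothetical and are not taken from any of the cited works.

// Minimal sketch of a threshold-based adaptation policy (illustrative only;
// class name, thresholds and the +1/0/-1 convention are hypothetical).
public class ThresholdPolicy {
    private static final double UPPER = 0.80; // scale out above 80% CPU utilization
    private static final double LOWER = 0.30; // scale in below 30% CPU utilization

    /** Returns the suggested change in worker count for the observed utilization. */
    public int decide(double cpuUtilization) {
        if (cpuUtilization > UPPER) {
            return +1;  // utilization violates the upper threshold: add a worker
        }
        if (cpuUtilization < LOWER) {
            return -1;  // utilization violates the lower threshold: remove a worker
        }
        return 0;       // within the thresholds: no adaptation action
    }

    public static void main(String[] args) {
        ThresholdPolicy policy = new ThresholdPolicy();
        System.out.println(policy.decide(0.92)); // 1  -> scale out
        System.out.println(policy.decide(0.55)); // 0  -> no action
        System.out.println(policy.decide(0.10)); // -1 -> scale in
    }
}

In practice such policies usually also apply a cooldown period or hysteresis so that a single noisy measurement does not trigger repeated scaling actions.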
“…To this end, they leverage a resource estimator that predicts the resource consumption based on the current resource utilization. Similarly, Van der Veen et al. [138] propose a controller to automatically adjust the number of virtual machines assigned to a deployment of the Storm ESP system.…”
Section: Efficient Resource Provisioning
confidence: 99%
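The controller by Van der Veen et al. is only summarized in the excerpt above. As a rough, hypothetical sketch of how a newly decided worker count could be applied to a running Storm topology, the Java snippet below shells out to Storm's rebalance command; the topology name and worker count in the usage example are made up, and this is not the controller from [138].

import java.io.IOException;

// Hypothetical sketch: apply a new worker count to a running Storm topology
// by invoking the "storm rebalance" CLI. Not the controller described in [138].
public class RebalanceController {

    public static void applyWorkerCount(String topology, int workers)
            throws IOException, InterruptedException {
        // "storm rebalance <topology> -n <num-workers>" redistributes the
        // topology's executors over the requested number of worker processes.
        Process p = new ProcessBuilder(
                "storm", "rebalance", topology, "-n", Integer.toString(workers))
                .inheritIO()
                .start();
        int exitCode = p.waitFor();
        if (exitCode != 0) {
            throw new IOException("storm rebalance exited with code " + exitCode);
        }
    }

    public static void main(String[] args) throws Exception {
        // Usage example with a made-up topology name and target worker count.
        applyWorkerCount("video-analysis-topology", 6);
    }
}

A controller like the one cited would additionally have to provision or release the virtual machines themselves (e.g., through a cloud API) before rebalancing the topology onto them.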
“…Traditional computing systems cannot offer the necessary efficiency and performance. Therefore, the big data industry has produced various platforms such as Spark [4], Hadoop [5,6] and Storm [7] to meet the demands of large-scale data processing. Apache Spark is one of the most widespread of the prevailing distributed frameworks, owing to its capability to support heavy applications and its performance on complex data processing [2,4].…”
Section: Introduction
confidence: 99%