2016 IEEE International Conference on Cloud Engineering (IC2E) 2016
DOI: 10.1109/ic2e.2016.11
Autoscaling for Hadoop Clusters

Cited by 27 publications (12 citation statements); references 12 publications.
“…Few works on auto-scaling in the cloud are dedicated to database querying. Some works have proposed autoscaling solutions for databases in the cloud, but they focused on specific technologies, for example MongoDB (Huang et al., 2013) or Hadoop (Gandhi et al., 2016). These works used performance (not monetary) metrics in their proposals.…”
Section: Discussion
confidence: 99%
“…In a cloud computing environment, many techniques have been developed for providing runtime guarantees for database queries [12, 52, 53, 78] and MapReduce-like jobs [14, 16, 21, 24, 31, 56, 72, 73], e.g., by estimating the number of computers that must be allocated to a query/job for it to be likely to finish by a user-specified deadline. For a given query/job, these techniques typically require statistics collected either from its prior executions or by first running it on many sample instances of the input data; they do not continuously refine the estimated execution cost, and they are not specifically designed for, or well suited to, machine learning model building and data mining algorithm execution.…”
Section: Related Work
confidence: 99%
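The deadline-driven allocation this excerpt describes can be sketched minimally. The helper below is hypothetical (not from any cited system) and assumes near-linear speedup from a single profiled runtime plus a fixed per-machine coordination overhead; real estimators use far richer statistics:

```python
import math

def machines_for_deadline(profiled_runtime_s: float,
                          deadline_s: float,
                          overhead_s: float = 0.0) -> int:
    """Estimate how many machines to allocate so a job is likely to
    finish by the deadline, assuming near-linear speedup relative to
    a profiled single-machine runtime (a strong simplification)."""
    if deadline_s <= overhead_s:
        raise ValueError("deadline too tight for per-machine overhead")
    # Each machine contributes (deadline - overhead) seconds of useful work.
    return math.ceil(profiled_runtime_s / (deadline_s - overhead_s))

# A job profiled at 600 s on one machine, a 55 s deadline, 5 s overhead:
# ceil(600 / 50) = 12 machines.
print(machines_for_deadline(600, 55, 5))
```

This is exactly the kind of one-shot estimate the excerpt criticizes: it is computed once from prior profiling and is never refined as the job runs.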
“…Gandhi et al. [7] define a fine-grained model for Hadoop/MapReduce. The computation model is approximated by second-order polynomials and thus would not be accurate for workloads with higher-order time complexity, such as deep learning.…”
Section: Related Work
confidence: 99%
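To make the second-order approximation concrete, the sketch below fits a quadratic runtime model t = a·n² + b·n + c exactly through three profiled (size, runtime) points by solving the 3×3 Vandermonde system with Cramer's rule. The data points are hypothetical, and this is an illustration of a quadratic cost model in general, not the actual model of Gandhi et al. [7]:

```python
def fit_quadratic(points):
    """Fit t = a*n^2 + b*n + c exactly through three profiled
    (n, t) points by solving the Vandermonde system with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = points
    # Determinant of [[x1^2, x1, 1], [x2^2, x2, 1], [x3^2, x3, 1]].
    det = x1**2 * (x2 - x3) - x2**2 * (x1 - x3) + x3**2 * (x1 - x2)
    a = (y1 * (x2 - x3) - y2 * (x1 - x3) + y3 * (x1 - x2)) / det
    b = (x1**2 * (y2 - y3) - x2**2 * (y1 - y3) + x3**2 * (y1 - y2)) / det
    c = (x1**2 * (x2 * y3 - x3 * y2)
         - x2**2 * (x1 * y3 - x3 * y1)
         + x3**2 * (x1 * y2 - x2 * y1)) / det
    return a, b, c

# Hypothetical profiled runtimes generated from t = 0.5*n^2 + 2*n + 10:
pts = [(2, 16.0), (4, 26.0), (8, 58.0)]
a, b, c = fit_quadratic(pts)   # recovers (0.5, 2.0, 10.0)
```

The excerpt's criticism is visible here: a quadratic fit extrapolates poorly when the true cost grows faster than n², as it can for deep-learning workloads.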