2016
DOI: 10.1007/s10586-016-0625-2

MrHeter: improving MapReduce performance in heterogeneous environments

Cited by 39 publications (16 citation statements)
References 17 publications
“…As data are transferred in the copy and shuffle stages, they have the most significant impact on execution time. There is an assigned weight for each stage, which is the ratio of the stage's execution time to the total execution time [28,29,32,33,34,35,36]. It is also possible to calculate the estimation error by comparing the estimated weights with the real ones [15].…”
Section: Introduction
confidence: 99%
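The statement above describes per-stage weights as the ratio of a stage's execution time to the total job time, with an estimation error measured against the real weights. A minimal Python sketch of that calculation follows; the stage names, timings, and estimated weights are purely illustrative assumptions, not values from the cited papers.

```python
# Illustrative sketch of the stage-weight idea quoted above: each stage's
# weight is its execution time divided by the total execution time, and
# estimated weights can be compared with measured ones to get the error.
# All stage names and numbers here are hypothetical.

def stage_weights(stage_times):
    """Weight of a stage = stage execution time / total execution time."""
    total = sum(stage_times.values())
    return {stage: t / total for stage, t in stage_times.items()}

def weight_errors(estimated, measured):
    """Absolute error between estimated and measured stage weights."""
    return {stage: abs(estimated[stage] - measured[stage]) for stage in measured}

# Hypothetical measured stage times (seconds) for one MapReduce job.
measured_times = {"map": 40.0, "copy": 25.0, "shuffle": 20.0, "reduce": 15.0}
measured = stage_weights(measured_times)

# Hypothetical weights estimated beforehand, e.g. from earlier similar jobs.
estimated = {"map": 0.45, "copy": 0.20, "shuffle": 0.20, "reduce": 0.15}

print(measured)                 # copy + shuffle account for 0.45 of the total
print(weight_errors(estimated, measured))
```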
“…On the other hand, performance parameters and the utilization of the available nodes can affect overall task processing. In [28], issues related to MapReduce performance in heterogeneous clusters are addressed; the authors focus on the unreasonable allocation of tasks to nodes with different computational capabilities and show that optimizing this allocation brings significant benefits and greatly improves the efficiency of MapReduce-based algorithms. The authors of [29] solve the problem of task assignment and resource allocation in distributed systems using a genetic algorithm.…”
Section: Optimization of Task Assignment in Distributed Environments
confidence: 99%
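The genetic-algorithm approach attributed to [29] can be illustrated with a generic sketch: encode a task-to-node assignment as a chromosome and evolve a population toward a low makespan on heterogeneous nodes. The encoding, operators, and every parameter below are generic assumptions for illustration, not the scheme used in the cited paper.

```python
# Hedged sketch of a genetic algorithm for task assignment on a
# heterogeneous cluster: minimize the makespan (finish time of the
# slowest node). Task costs, node speeds, and GA parameters are
# illustrative only.
import random

TASK_COST = [4, 8, 3, 6, 5, 7]      # work units per task (hypothetical)
NODE_SPEED = [1.0, 2.0, 0.5]        # relative node speeds (hypothetical)

def makespan(assignment):
    """Finish time of the slowest node under a task -> node assignment."""
    load = [0.0] * len(NODE_SPEED)
    for task, node in enumerate(assignment):
        load[node] += TASK_COST[task] / NODE_SPEED[node]
    return max(load)

def evolve(pop_size=30, generations=100, mutation_rate=0.1):
    # Each individual maps every task index to a node index.
    pop = [[random.randrange(len(NODE_SPEED)) for _ in TASK_COST]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                  # lower makespan = fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASK_COST))    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:          # mutate one gene
                child[random.randrange(len(child))] = random.randrange(len(NODE_SPEED))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, makespan(best))
```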
“…Default Hadoop places data uniformly across the cluster, assuming that all nodes are homogeneous [6].…”
Section: Introduction
confidence: 99%
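To make that contrast concrete, here is a small sketch (not Hadoop code) comparing uniform block placement, which implicitly assumes homogeneous nodes, with placement proportional to a hypothetical per-node capability score.

```python
# Illustrative comparison of uniform vs. capability-proportional block
# placement on heterogeneous nodes. Capacities and block counts are
# hypothetical, not taken from the cited work.

def uniform_placement(num_blocks, num_nodes):
    """Default assumption: every node stores the same share of blocks."""
    return [num_blocks // num_nodes] * num_nodes

def proportional_placement(num_blocks, node_capability):
    """Heterogeneity-aware: blocks proportional to each node's capability.
    Rounding may leave a small remainder in the general case."""
    total = sum(node_capability)
    return [round(num_blocks * c / total) for c in node_capability]

node_capability = [1.0, 2.0, 0.5]    # hypothetical relative processing power
print(uniform_placement(120, 3))                       # [40, 40, 40]
print(proportional_placement(120, node_capability))    # [34, 69, 17]
```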