2015
DOI: 10.1016/j.comnet.2015.02.030
Multihybrid job scheduling for fault-tolerant distributed computing in policy-constrained resource networks

Cited by 17 publications (7 citation statements)
References 19 publications
“…At present, job-level scheduling in the hybrid cloud environment has produced noteworthy research results. The research in [4] proposed a fault-tolerant scheduling strategy in Hadoop, which optimizes completion times, maximizes resource utilization, and minimizes the task failure rate. In [5], a new Hadoop scheduling strategy, COSHH, was presented, and it considers heterogeneity at both the application and cluster levels.…”
Section: Introduction
confidence: 99%
“…According to Moon and Youn (2015), 70–75% of resources have failure rates of around 20–40% in workload archives such as DEUB, UCB and SDSC (Kondo et al. 2010). Furthermore, their application-level traces reveal that most resources have high failure probabilities, which further causes scheduling performance degradation and resource unavailability (Kondo et al. 2010; Li et al. 2006).…”
Section: Introduction
confidence: 99%
“…The skewed partition detection (SPD) algorithm consists of three steps: first, it evaluates each Reducer's computational capability based on log information from tasks successfully executed on that Reducer; next, it calculates a load-distribution threshold for each Reducer with the objective of balancing the skewed Reduce input load; finally, it checks whether any skewed partition exists by comparing each Reducer's load-allocation threshold against the sizes of the virtual partitions corresponding to all Map task output data [14][15].…”
Section: Heterogeneity-Aware Load Balancing
confidence: 99%
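The three SPD steps quoted above can be sketched in code. This is a minimal illustrative sketch only: the function name, the input shapes, and the capability/threshold formulas (bytes-per-second capability, thresholds proportional to capability share) are assumptions for exposition, not the cited paper's actual implementation.

```python
def detect_skewed_partitions(reducer_logs, virtual_partition_sizes):
    """Hypothetical sketch of skewed partition detection (SPD).

    reducer_logs: {reducer_id: (bytes_processed, seconds_taken)} from
        successfully executed Reduce tasks on each Reducer.
    virtual_partition_sizes: {reducer_id: [sizes of the virtual
        partitions of Map output currently assigned to that Reducer]}.
    Returns the set of Reducer ids holding a skewed partition.
    """
    # Step 1: estimate each Reducer's computational capability from
    # its task logs (here: throughput in bytes per second).
    capability = {r: b / t for r, (b, t) in reducer_logs.items()}
    total_cap = sum(capability.values())
    total_load = sum(sum(v) for v in virtual_partition_sizes.values())

    # Step 2: a load-distribution threshold per Reducer, proportional
    # to its share of total capability, so faster Reducers may take
    # proportionally more of the Reduce input load.
    threshold = {r: total_load * c / total_cap
                 for r, c in capability.items()}

    # Step 3: flag Reducers whose assigned virtual partitions exceed
    # their threshold as holding a skewed partition.
    return {r for r, parts in virtual_partition_sizes.items()
            if sum(parts) > threshold[r]}
```

With two equally capable Reducers and a 900/100 split of a 1000-unit load, each threshold is 500, so only the overloaded Reducer is flagged.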