2020
DOI: 10.1007/s42979-020-00182-3

A Multi-Optimization Technique for Improvement of Hadoop Performance with a Dynamic Job Execution Method Based on Artificial Neural Network

Abstract: The improvement of Hadoop performance has received considerable attention from researchers in the cloud computing field. Most studies have focused on improving the performance of a Hadoop cluster. Notably, Hadoop exposes many configuration parameters that must be tuned to improve performance. This paper proposes a mechanism to improve Hadoop performance, schedule jobs, and allocate and utilize resources. Specifically, we present an improved ant colony optimization method to schedule jobs according to the job size an…
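The abstract only sketches the improved ant colony optimization (ACO) scheduler at a high level, and the full text is not included in this report. As a minimal, assumed illustration of how an ACO-style ordering driven by job size and expected execution time could look, the Python sketch below builds a schedule that favours small, short jobs; the Job class, the desirability heuristic, the cost model, and all parameter values are hypothetical and are not taken from the paper.

```python
# Illustrative ACO-style job ordering (hypothetical names; not the paper's code).
import random
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    size_mb: float      # input data size of the MapReduce job
    latency_est: float  # expected execution time in seconds

def desirability(job: Job) -> float:
    # Heuristic attractiveness: smaller, shorter jobs score higher.
    return 1.0 / (job.size_mb * job.latency_est)

def aco_schedule(jobs, ants=20, iters=50, alpha=1.0, beta=2.0, rho=0.1):
    """Build a job execution order with a simple ant-colony search."""
    pheromone = {job.name: 1.0 for job in jobs}
    best_order, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            remaining, order = list(jobs), []
            while remaining:
                # Picking probability mixes pheromone and heuristic desirability.
                weights = [(pheromone[j.name] ** alpha) * (desirability(j) ** beta)
                           for j in remaining]
                pick = random.choices(remaining, weights=weights, k=1)[0]
                order.append(pick)
                remaining.remove(pick)
            # Cost = sum of completion times, which rewards running short jobs first.
            elapsed = cost = 0.0
            for job in order:
                elapsed += job.latency_est
                cost += elapsed
            if cost < best_cost:
                best_order, best_cost = order, cost
        # Evaporate pheromone, then reinforce the best order found so far.
        for name in pheromone:
            pheromone[name] *= (1.0 - rho)
        for rank, job in enumerate(best_order):
            pheromone[job.name] += 1.0 / (rank + 1)
    return best_order

jobs = [Job("terasort", 4096, 300), Job("wordcount", 512, 40), Job("grep", 128, 15)]
print([job.name for job in aco_schedule(jobs)])  # small, short jobs come first
```

Running the example prints an order that places the small grep job ahead of the large terasort job, which matches the small-jobs-first priority the citing papers describe below.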
Cited by 12 publications (6 citation statements)
References 27 publications (33 reference statements)
“…This method takes into account both task size and expected execution time. In addition, they have improved Hadoop performance by integrating an aggregation node into the default architecture of the HDFS distributed file system [11]. Other research has exploited AEML, an acceleration engine specially designed to balance the workload across multiple GPUs.…”
Section: Related Work on Load Balancing (mentioning)
confidence: 99%
“…Based on the energy consumption in each node, a decision is made to launch map and/or reduce tasks. An ACO algorithm is used in [22] to finalize the job execution in a batch based on heterogeneous job size and its expected latency. However, the job taking less data and response time is given high priority in the job schedule.…”
Section: Literature Survey (mentioning)
confidence: 99%
“…The proposed schedulers were not evaluated in either dynamic or heterogeneous environments. Alanazi et al [20] proposed to give priority to the jobs with the minimum data size and response time for job scheduling. To achieve that, they proposed an artificial neural network to predict resource usage and running jobs by Hadoop data nodes.…”
Section: Related Work (mentioning)
confidence: 99%
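The statement above describes two components of the cited approach: a priority rule favouring jobs with small data size and response time, and an artificial neural network that predicts resource usage and running jobs on Hadoop data nodes. As a rough sketch of what such a prediction step could look like (the actual input features, network architecture, and training data are not given in this report and are assumed here), a small scikit-learn multilayer perceptron can map per-node metrics to an expected task execution time:

```python
# Assumed illustration of ANN-based prediction of per-node execution time;
# the feature set, target model, and all values are synthetic, not from the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic per-node samples: [cpu_util, mem_util, running_tasks, split_size_mb].
X = rng.uniform([0.1, 0.1, 0.0, 64.0], [0.9, 0.9, 8.0, 1024.0], size=(500, 4))
# Assumed latency model: larger inputs and busier nodes take longer (plus noise).
y = 5 + 0.05 * X[:, 3] * (1 + X[:, 0] + X[:, 1]) + 2 * X[:, 2] + rng.normal(0, 1, 500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0),
)
model.fit(X, y)

# Same 256 MB split on a lightly loaded node versus a busy one.
candidates = np.array([[0.2, 0.3, 1.0, 256.0],
                       [0.8, 0.7, 6.0, 256.0]])
print(model.predict(candidates))  # the scheduler would prefer the lower estimate
```

A scheduler built around such a model could query it for each candidate data node and place the task where the predicted execution time is lowest, which is consistent with the priority rule described in the citing papers.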