2022
DOI: 10.1186/s13677-022-00322-5
CLQLMRS: improving cache locality in MapReduce job scheduling using Q-learning

Abstract: Scheduling of MapReduce jobs is an integral part of Hadoop and effective job scheduling has a direct impact on Hadoop performance. Data locality is one of the most important factors to be considered in order to improve efficiency, as it affects data transmission through the system. A number of researchers have suggested approaches for improving data locality, but few have considered cache locality. In this paper, we present a state-of-the-art job scheduler, CLQLMRS (Cache Locality with Q-Learning in MapReduce …
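The abstract describes a Q-learning-based scheduler but the excerpt omits the concrete formulation. As a rough, minimal sketch of the general technique only, the code below trains a tabular Q-learning policy that decides whether to assign a map task to a candidate node or delay it, with a reward that favours cache-local and data-local placement. The state space, actions, reward values, and hyperparameters are assumptions made for illustration; they are not taken from CLQLMRS itself.

```python
# Minimal, illustrative sketch of tabular Q-learning for locality-aware task
# placement. The state/action encoding, reward values, and hyperparameters are
# assumptions for illustration; they are NOT the formulation used in CLQLMRS.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # assumed learning rate, discount, exploration rate

# Assumed state: locality of the candidate node for the next map task.
STATES = ["cache_local", "data_local", "rack_local", "remote"]
# Assumed actions: launch the task on the candidate node now, or delay and wait.
ACTIONS = ["assign", "delay"]

# Assumed reward shaping: higher for better locality, small penalty for delaying.
REWARD = {
    ("cache_local", "assign"): 10.0,
    ("data_local", "assign"): 5.0,
    ("rack_local", "assign"): 1.0,
    ("remote", "assign"): -5.0,
}
REWARD.update({(s, "delay"): -1.0 for s in STATES})

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the two scheduling actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action):
    """Simulated environment: immediate reward plus the next offer's locality."""
    return REWARD[(state, action)], random.choice(STATES)

# Standard Q-learning update loop over simulated scheduling decisions.
state = random.choice(STATES)
for _ in range(10_000):
    action = choose_action(state)
    reward, next_state = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# After training, the greedy policy should assign cache/data-local tasks
# immediately and delay remote ones.
for s in STATES:
    print(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
```

The training cost noted by the citing authors below (the policy must be retrained when the cluster environment changes) corresponds to rerunning a loop of this kind against fresh scheduling feedback.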

Cited by 3 publications (4 citation statements). References 22 publications.
“…They also propose a delay capacity scheduling algorithm to ensure that most tasks can achieve localization and speed up job completion time. The researchers in [13] developed a novel job scheduler, CLQLMRS, using reinforcement learning to improve data and cache locality in MapReduce job scheduling, highlighting the importance of reducing job execution time for enhancing Hadoop performance. The limitations of the study include the need to train the scheduling policy, which may be challenging in environments with rapid changes, potentially hindering timely retraining.…”
Section: Hadoop Yarn Scheduling Challenges In Resource-constrained Cl... (mentioning)
confidence: 99%
“…The researchers in [13] discuss the development of a novel job scheduler, CLQLMRS, using reinforcement learning to improve data and cache locality in MapReduce job scheduling, highlighting the importance of reducing job execution time for enhancing Hadoop performance. In [14], the authors propose a DQ-DCWS algorithm to balance data locality and delays in Hadoop while considering five Quality of Service factors.…”
Section: Introduction (mentioning)
confidence: 99%
“…They also propose a delay capacity scheduling algorithm to ensure that most tasks can achieve localization and speed up job completion time. The researchers in [14] developed a novel job scheduler, CLQLMRS, using reinforcement learning to improve data and cache locality in MapReduce job scheduling, highlighting the importance of reducing job execution time for enhancing Hadoop performance. The limitations of the study include the need to train the scheduling policy, which may be challenging in environments with rapid changes, potentially hindering timely retraining.…”
Section: Hadoop Yarn Scheduling Challenges In Resource Constrained Cl... (mentioning)
confidence: 99%
“…The researchers in [14] discuss the development of a novel job scheduler, CLQLMRS, using reinforcement learning to improve data and cache locality in MapReduce job scheduling, highlighting the importance of reducing job execution time for enhancing Hadoop performance. In [15], the authors propose a DQ-DCWS algorithm to balance data locality and delays in Hadoop while considering five Quality of Service factors.…”
Section: Introduction (mentioning)
confidence: 99%