Proceedings of the SC '23 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, 2023
DOI: 10.1145/3624062.3624287

Distributed Data Locality-Aware Job Allocation

Ana Markovic,
Dimitris Kolovos,
Leandro Soares Indrusiak

Abstract: Scheduling tasks close to their associated data is crucial in distributed systems to minimize network traffic and latency. Some Big Data frameworks like Apache Spark employ locality functions and job allocation algorithms to minimize network traffic and execution times. However, these frameworks rely on centralized mechanisms, where the master node determines data locality by allocating tasks to available workers with minimal data transfer time, ignoring variances in worker configurations and availability. To …
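To make the centralized scheme described above concrete, here is a minimal sketch (in Python, with entirely hypothetical worker names, block IDs, sizes, and bandwidth) of a master-side greedy allocator that assigns each task to the available worker with the lowest estimated data-transfer time. It illustrates the baseline behaviour the abstract critiques, not the authors' distributed algorithm.

from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    hosted_blocks: set = field(default_factory=set)  # data blocks stored locally on this worker

@dataclass
class Task:
    task_id: str
    input_block: str
    size_mb: float

def transfer_time(task: Task, worker: Worker, bandwidth_mb_s: float = 100.0) -> float:
    """Estimated time to move the task's input to the worker (zero if the block is already local)."""
    return 0.0 if task.input_block in worker.hosted_blocks else task.size_mb / bandwidth_mb_s

def allocate(tasks: list[Task], workers: list[Worker]) -> dict[str, str]:
    """Centralized greedy allocation: each task goes to the worker with minimal estimated transfer time."""
    assignment = {}
    for task in tasks:
        best = min(workers, key=lambda w: transfer_time(task, w))
        assignment[task.task_id] = best.name
    return assignment

if __name__ == "__main__":
    workers = [Worker("w1", {"blk-a"}), Worker("w2", {"blk-b"})]
    tasks = [Task("t1", "blk-a", 512.0), Task("t2", "blk-b", 256.0), Task("t3", "blk-c", 128.0)]
    print(allocate(tasks, workers))  # t1 -> w1, t2 -> w2; t3 ties on transfer time and lands on the first worker

Because the estimate considers only transfer time, ties resolve arbitrarily and per-worker configuration and current availability are ignored, which is precisely the limitation the abstract points at.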

Cited by 1 publication (1 citation statement)
References 8 publications (11 reference statements)
“…In the context of edge and fog computing, where resource constraints and network variability pose significant challenges, Apache Spark offers several features to overcome these obstacles. One notable feature is its ability to optimize data locality, minimizing data transfer across the network [53]. Furthermore, Spark's flexibility in terms of deployment in various types of infrastructure, including edge devices and fog nodes, makes it adaptable to diverse computing environments [46].…”
Section: Batch Processing
confidence: 99%
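As a concrete illustration of the locality optimization the citing work refers to, the following is a minimal sketch (assuming PySpark is available; the application name and values are illustrative, not recommendations) of Spark's delay-scheduling settings, which control how long the scheduler waits for a data-local executor slot before relaxing to a less local placement.

from pyspark.sql import SparkSession

# Sketch: tuning Spark's delay-scheduling knobs for data locality.
spark = (
    SparkSession.builder
    .appName("locality-tuning-sketch")            # hypothetical application name
    .config("spark.locality.wait", "3s")          # base wait before dropping one locality level
    .config("spark.locality.wait.process", "3s")  # wait for PROCESS_LOCAL placement
    .config("spark.locality.wait.node", "3s")     # wait for NODE_LOCAL placement
    .config("spark.locality.wait.rack", "3s")     # wait for RACK_LOCAL placement
    .getOrCreate()
)

Raising these waits favours locality at the cost of scheduling delay; lowering them favours prompt execution over minimal network transfer, a trade-off that matters on the constrained, variable networks of edge and fog deployments.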