2023
DOI: 10.22266/ijies2023.0630.41
Scheduling of Jobs Allocation for Apache Spark Using Kubernetes for Efficient Execution of Big Data Application

Abstract: Cloud services are in high demand due to their large storage and computing capacity. Apache Spark provides an open deployment framework for data storage and computation using cluster computing. The default Spark core scheduler uses FIFO to manage job execution in batches. However, FIFO may not be suitable for large-scale clusters because it allocates resources unevenly across different types of applications. As a result, many executors remain underutilized and resourc…
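The FIFO behavior the abstract criticizes is controlled by a single Spark configuration property. As a minimal sketch, a `spark-defaults.conf` fragment contrasting the default FIFO mode with the FAIR alternative might look as follows (`spark.scheduler.mode` and `spark.scheduler.allocation.file` are standard Spark settings; the pool-file path is an illustrative assumption, not from the paper):

```
# Default: jobs run in strict submission order, so one large batch job
# can hold the cluster and starve later submissions.
spark.scheduler.mode             FIFO

# Alternative: share executors between concurrent jobs in round-robin
# fashion, optionally with weighted pools defined in an XML file.
# spark.scheduler.mode           FAIR
# spark.scheduler.allocation.file  /etc/spark/fairscheduler.xml  # hypothetical path
```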


Cited by 0 publications
References 31 publications
