2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
DOI: 10.1109/ipdps.2016.105
MEMTUNE: Dynamic Memory Management for In-Memory Data Analytic Platforms

Cited by 58 publications (38 citation statements)
References 14 publications
“…However, it does not consider tuning any parameters or their effect on duration. MEMTune [31] is able to determine the memory of Spark's executors by dynamically changing the size of both the JVM and the RDD cache. It also prefetches data that is going to be used in future stages and evicts data blocks that are not going to be used.…”
Section: Related Work
confidence: 99%
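As context for this excerpt, the sketch below is not taken from the MEMTUNE paper; it only shows the static Spark memory knobs that such a dynamic approach would adjust at runtime. The configuration keys are Spark's standard unified-memory settings, while the values and the input path are purely illustrative.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.storage.StorageLevel

    // Illustrative static configuration; a MemTune-style manager picks these
    // splits dynamically per workload instead of fixing them up front.
    val spark = SparkSession.builder()
      .appName("memtune-knobs-sketch")
      .config("spark.executor.memory", "8g")          // JVM heap per executor
      .config("spark.memory.fraction", "0.6")         // execution + storage share of the heap
      .config("spark.memory.storageFraction", "0.5")  // portion of that share protected for cached data
      .getOrCreate()

    // Cached data whose memory share a dynamic manager would grow or shrink.
    val lines = spark.sparkContext.textFile("hdfs:///data/input")  // hypothetical path
    lines.persist(StorageLevel.MEMORY_AND_DISK)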
“…Megastore [44] offers a distributed storage system with strong consistency guarantees and high availability for interactive online applications. EAD [45] and MemTune [46] are dynamic memory managers based on workload memory demand and in-memory data cache needs.…”
Section: Spark Implementation
confidence: 99%
“…Despite the significant performance impact of memory caches, cache management remains a relatively uncharted territory in data parallel systems. Prevalent parallel frameworks (e.g., Spark [2], Tez [7], and Tachyon [4]) simply employ LRU to manage cached data on cluster machines, which results in a significant performance loss [3], [24].…”
Section: Related Work
confidence: 99%
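For reference, a minimal count-based LRU sketch of the eviction policy this excerpt refers to, built on Java's access-ordered LinkedHashMap. It is illustrative only: Spark's BlockManager evicts by tracked block sizes, not block counts.

    import java.util.{LinkedHashMap => JLinkedHashMap}
    import java.util.Map.Entry

    // Access-ordered map that drops the least recently used block once the
    // configured capacity (in blocks) is exceeded.
    class LruBlockCache[K, V](capacity: Int)
        extends JLinkedHashMap[K, V](16, 0.75f, /* accessOrder = */ true) {
      override def removeEldestEntry(eldest: Entry[K, V]): Boolean =
        size() > capacity
    }

    val cache = new LruBlockCache[String, Array[Byte]](capacity = 2)
    cache.put("rdd_1_0", Array[Byte](1))
    cache.put("rdd_1_1", Array[Byte](2))
    cache.get("rdd_1_0")                  // touch: rdd_1_0 becomes most recently used
    cache.put("rdd_2_0", Array[Byte](3))  // evicts rdd_1_1, the least recently used block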
“…To our knowledge, the recently proposed MemTune [24] is the only caching system that leverages the application semantics. MemTune dynamically adjusts the memory share for task computation and data caching in Spark and evicts/prefetches data as needed.…”
Section: Related Work
confidence: 99%
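The evict/prefetch behaviour described here can be approximated at the application level with standard RDD calls. The sketch below is an assumption-laden illustration (unpersist for eviction, persist plus a materializing action for prefetch), not MemTune's in-framework implementation; the helper names are hypothetical.

    import org.apache.spark.rdd.RDD
    import org.apache.spark.storage.StorageLevel

    // Drop cached data that no later stage reads, freeing storage memory
    // for task computation.
    def evictUnneeded(done: Seq[RDD[_]]): Unit =
      done.foreach(_.unpersist(blocking = false))

    // Warm the cache with data the next stage will need.
    def prefetchForNextStage[T](upcoming: RDD[T]): Unit = {
      upcoming.persist(StorageLevel.MEMORY_ONLY)
      upcoming.foreachPartition((_: Iterator[T]) => ())  // materialize blocks ahead of use
    }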