2015 IEEE 31st International Conference on Data Engineering (ICDE 2015)
DOI: 10.1109/icde.2015.7113375
“Anti-Caching”-based elastic memory management for Big Data

Abstract: The increase in the capacity of main memory coupled with the decrease in cost has fueled the development of in-memory database systems that manage data entirely in memory, thereby eliminating the disk I/O bottleneck. However, as we shall explain, in the Big Data era, maintaining all data in memory is impossible, and even unnecessary. Ideally, we would like to have the high access speed of memory with the large capacity and low price of disk. This hinges on the ability to effectively utilize both the main memory…
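The core idea the abstract points at, keeping hot data in memory while cold data is pushed out to disk and pulled back on demand, is what the title calls "anti-caching". Below is a minimal sketch of that two-tier layout for a simple key-value interface; the class and method names are illustrative assumptions and do not come from the paper's actual system.

```python
# Hedged sketch of an anti-caching-style store: hot tuples stay in an in-memory
# table, cold tuples are evicted to a disk-backed spill file and "un-evicted"
# on access. Names and layout are illustrative, not the paper's implementation.
import os
import pickle
from collections import OrderedDict

class AntiCachingStore:
    def __init__(self, memory_budget, spill_path="cold_tuples.db"):
        self.memory_budget = memory_budget        # max number of tuples kept in memory
        self.hot = OrderedDict()                  # key -> value, kept in LRU order
        self.cold = {}                            # key -> byte offset in the spill file
        self.spill_file = open(spill_path, "ab+")

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)                 # mark as most recently used
        self._evict_if_needed()

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)
            return self.hot[key]
        if key in self.cold:                      # un-evict: read the tuple back from disk
            offset = self.cold.pop(key)
            self.spill_file.seek(offset)
            value = pickle.load(self.spill_file)
            self.put(key, value)
            return value
        raise KeyError(key)

    def _evict_if_needed(self):
        while len(self.hot) > self.memory_budget:
            key, value = self.hot.popitem(last=False)   # coldest tuple first
            self.spill_file.seek(0, os.SEEK_END)
            self.cold[key] = self.spill_file.tell()
            pickle.dump(value, self.spill_file)
            self.spill_file.flush()
```

In this sketch the eviction order is plain LRU; the citation statements below discuss precisely the cost of maintaining that ordering and alternatives to it.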

Cited by 9 publications (17 citation statements)
References 30 publications
“…Our problem can be considered a variation of the anti-caching problem, applied in microblogs platforms rather than relational main-memory databases. However, existing techniques [8,15,30] have limitations in solving our problem. First, none of them addresses top-k queries, which are a major component of microblogs systems [5,19,28].…”
Section: Related Work
confidence: 98%
“…However, it still uses a traditional policy (LRU) that requires tracking the usage of individual data items. This poses a significant overhead, since an LRU-ordered list must be maintained for all data items in the system [30]. Unlike this work, kFlushing uses top-k queries as a guide to smartly select flushing victims with minimal overhead that does not limit system scalability.…”
Section: Related Work
confidence: 99%
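The contrast drawn in this statement, per-access LRU bookkeeping versus selecting flushing victims from what the cached top-k answers no longer need, can be illustrated with a small sketch. Both classes below are illustrative assumptions, not the actual policies of the cited systems; kFlushing's real selection logic is more involved.

```python
# Hedged contrast: a traditional LRU tracker must update an ordered structure on
# every single access, while a top-k-guided flusher only does work at flush time.
from collections import OrderedDict

class LruTracker:
    """Per-access bookkeeping: every read or write reorders the LRU list."""
    def __init__(self):
        self.order = OrderedDict()

    def touch(self, item_id):
        self.order[item_id] = None
        self.order.move_to_end(item_id)   # O(1), but paid on every access

    def victims(self, n):
        # Least recently touched items come out first.
        return [self.order.popitem(last=False)[0]
                for _ in range(min(n, len(self.order)))]

class TopKGuidedFlusher:
    """No per-access tracking: victims are items no cached top-k answer still needs."""
    def __init__(self, all_items):
        self.items = set(all_items)

    def victims(self, n, cached_topk_answers):
        needed = set().union(*cached_topk_answers)     # items some top-k answer still shows
        candidates = list(self.items - needed)         # safe to flush: invisible to top-k results
        return candidates[:n]
```

The point of the comparison is where the cost lands: LruTracker pays on every access to every data item, while the top-k-guided flusher consults only the current query answers when memory must be reclaimed.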
“…We choose these two sets to investigate three scenarios: (i) both servers have enough memory, (ii) Xeon has enough memory while ARM does not, and (iii) neither server has enough memory. These scenarios cover memory-management issues in modern database systems [34]. The join query is both memory- and I/O-bound, as the smaller table is usually used to build an in-memory structure while the other table is scanned once from storage.…”
Section: Shark
confidence: 99%
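The join pattern this statement describes, building an in-memory structure from the smaller table and scanning the larger one once from storage, is the classic hash join. A minimal sketch follows; the CSV layout and column indices are assumptions for illustration only.

```python
# Hedged sketch of the join pattern the quoted passage describes: the smaller
# relation is loaded into an in-memory hash table (the memory-bound part) and
# the larger relation is streamed from storage once (the I/O-bound part).
import csv

def hash_join(small_csv, large_csv, small_key_col=0, large_key_col=0):
    # Build phase: the whole smaller table must fit within the memory budget.
    build = {}
    with open(small_csv, newline="") as f:
        for row in csv.reader(f):
            build.setdefault(row[small_key_col], []).append(row)

    # Probe phase: a single sequential scan of the larger table from storage.
    with open(large_csv, newline="") as f:
        for row in csv.reader(f):
            for match in build.get(row[large_key_col], []):
                yield match + row
```

If the build side no longer fits in memory (the ARM scenario above), the in-memory table itself has to spill to storage, which is exactly the memory-management issue the quoted benchmark is probing.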