2020
DOI: 10.14778/3384345.3384356

Enabling low tail latency on multicore key-value stores

Abstract: Modern applications employ key-value stores (KVS) at some point in their software stack, often as a caching system or a storage manager. Many of these applications also require a high degree of responsiveness and performance predictability. However, most KVS share similar design decisions that focus on improving throughput metrics, at times by sacrificing latency. While latency can occasionally be reduced by over-provisioning hardware, this entails a significant increase in costs. In this paper we prese…

Cited by 18 publications (6 citation statements)
References 21 publications
“…Similarly, the latest state-of-the-art key-value stores from academia, such as [18-20, 31-33], either concentrate on a single-node setup or, like the industry ones above, use hash functions to determine where to place data. This kind of placement does not take into account the delay distribution of the host cluster they run on or the access pattern of the data to store, which is crucial for optimizing access latency.…”
Section: Popular Key-value Stores
Citation type: mentioning (confidence: 99%)
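To make the criticized placement scheme concrete, below is a minimal, hypothetical sketch of latency-oblivious hash-based key placement. The node names, the measured-latency table, and the `place` helper are illustrative assumptions, not code from the cited systems; the point is only that the hash alone decides placement, so observed node delays never influence where a key lands.

```python
# Hypothetical sketch: latency-oblivious, hash-based key placement.
# Node names and the latency table are illustrative assumptions only.
import hashlib

NODES = ["node-a", "node-b", "node-c"]                            # assumed cluster members
NODE_LATENCY_MS = {"node-a": 0.2, "node-b": 1.5, "node-c": 0.4}   # assumed observed delays


def place(key: str) -> str:
    """Pick a node purely from a hash of the key."""
    digest = hashlib.sha256(key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(NODES)
    return NODES[index]


if __name__ == "__main__":
    for k in ["user:42", "session:7", "cart:99"]:
        node = place(k)
        # The choice ignores NODE_LATENCY_MS entirely, so a hot key can land
        # on the slowest node and inflate tail latency.
        print(f"{k} -> {node} (observed delay {NODE_LATENCY_MS[node]} ms)")
```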
“…Specially crafted real-time databases [27], novel algorithms for scheduling and routing queries [24, 25, 45], transactional concepts [5], or query evaluation strategies [19, 51] work from inside the database. Careful tailoring of the whole software stack from OS kernel to DB engine [30, 33], or crafting dedicated operating systems [4, 20-22, 36, 37] to leverage the advantages of modern hardware in database system engineering (e.g., [15, 29]), contributes to solutions from below the database.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…It has been reported that database-internal garbage collection [2, 29] can also cause latency spikes, which might, however, be seen as part of productive operation. Our work considers the effects of garbage collection inside the test harness, rather than inside the database engine.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…As soon as the volatile part is full and compaction to an SSTable starts, the PMem replication is used for concurrent queries. A recent proposal for a modern key-value store is RStore [20]. It can be summarized as log-structured storage plus an index.…”
Section: PMem-aware Storage Engines
Citation type: mentioning (confidence: 99%)
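For readers unfamiliar with the "log-structured storage plus index" pattern mentioned above, here is a minimal sketch under simple assumptions: writes are appended to a log file and an in-memory dictionary maps each key to the offset of its latest record. The class name, file layout, and tab-separated record format are illustrative choices, not RStore's actual design.

```python
# Minimal sketch of the "log-structured storage plus index" pattern.
# Illustrative only; not RStore's implementation.
import os


class LogStructuredKV:
    def __init__(self, path: str):
        self.index: dict[str, int] = {}      # key -> offset of its latest record
        self.log = open(path, "a+b")         # append-only log file

    def put(self, key: str, value: str) -> None:
        record = f"{key}\t{value}\n".encode()
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        self.log.write(record)               # append; old versions stay in the log
        self.log.flush()
        self.index[key] = offset             # point the index at the new record

    def get(self, key: str) -> str | None:
        offset = self.index.get(key)
        if offset is None:
            return None
        self.log.seek(offset)
        stored_key, value = self.log.readline().decode().rstrip("\n").split("\t", 1)
        return value if stored_key == key else None


if __name__ == "__main__":
    kv = LogStructuredKV("demo.log")
    kv.put("answer", "42")
    kv.put("answer", "43")                   # overwrite: log grows, index is updated
    print(kv.get("answer"))                  # -> "43", read via the index
```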