Proceedings of the 8th International Conference on Cloud Computing and Services Science 2018
DOI: 10.5220/0006703504400447

PopRing: A Popularity-aware Replica Placement for Distributed Key-Value Store

Cited by 6 publications (8 citation statements). References 7 publications.
“…(7) The request is executed by a worker thread, which reads data (in memory, if present in the cache, or on disk). (8) The replica sends data to the coordinator. (9) The coordinator responds to the client.…”
Section: Scheduling in Persistent and Distributed Key-Value Stores (mentioning; confidence: 99%)
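The read path quoted above (a worker thread serves the request from cache or disk, the replica returns the data to the coordinator, and the coordinator answers the client) can be sketched as a minimal simulation. All class and variable names below are illustrative assumptions, not identifiers from the cited system.

```python
# Minimal sketch of the quoted read path: a worker on the replica reads the
# key from an in-memory cache if present, otherwise from "disk" (a dict
# standing in for persistent storage); the replica hands the value to the
# coordinator, which responds to the client.

class Replica:
    def __init__(self, disk):
        self.disk = disk      # stand-in for on-disk storage
        self.cache = {}       # in-memory cache

    def read(self, key):
        # (7) a worker thread executes the request: cache first, then disk
        if key in self.cache:
            return self.cache[key]
        value = self.disk[key]
        self.cache[key] = value
        return value

class Coordinator:
    def __init__(self, replica):
        self.replica = replica

    def handle_get(self, key):
        # (8) the replica sends the data to the coordinator
        value = self.replica.read(key)
        # (9) the coordinator responds to the client
        return value

replica = Replica(disk={"user:42": "alice"})
coord = Coordinator(replica)
print(coord.handle_get("user:42"))  # first read is served from disk
print(coord.handle_get("user:42"))  # second read is served from the cache
```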
“…It is often observed, for instance, that even if most requests are successfully handled with low latency, some of them suffer from high delays globally impacting responsiveness: this is the famous tail latency problem [10]. The optimization of these metrics in persistent key-value stores has been the subject of extensive research, from the maximization of throughput through careful optimization of internal data structures [25] or CPU overhead minimization [21], to tail latency mitigation through redundant requests [26], popularity-aware data replication [8], or internal I/O scheduling [6]. Among these optimization techniques, the proper scheduling of client requests has received significant attention.…”
Section: Introduction (mentioning; confidence: 99%)
“…In another previous work, Cavalcante et al. improved the load balancing for workloads with data access skew. They defined replica placement as a multi-objective optimization that takes into account data access distribution, data redundancy, and data movement.…”
Section: Related Work (mentioning; confidence: 99%)
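The multi-objective formulation described above (balancing data access distribution, redundancy, and data movement) can be sketched as a weighted score over candidate placements. The weights and the scoring function itself are illustrative assumptions, not the cited paper's actual formulation.

```python
# Hedged sketch of a multi-objective placement score: each candidate replica
# placement is scored on load imbalance (how unevenly key popularity falls
# across nodes) and data movement (keys that change node versus the current
# placement); a search procedure would minimize this score.

def placement_score(placement, popularity, current, n_nodes,
                    w_balance=1.0, w_move=0.5):
    # placement / current: {key: node_id}; popularity: {key: access weight}
    load = [0.0] * n_nodes
    for key, node in placement.items():
        load[node] += popularity[key]
    imbalance = max(load) - min(load)  # lower means better load balance
    moved = sum(1 for k in placement if placement[k] != current.get(k))
    return w_balance * imbalance + w_move * moved  # minimize this

current = {"a": 0, "b": 0, "c": 1}
popularity = {"a": 10.0, "b": 1.0, "c": 1.0}
# Moving the hot key "a" to node 1 trades two data movements for much
# better balance, so the candidate scores lower (better) than the status quo.
candidate = {"a": 1, "b": 0, "c": 0}
print(placement_score(current, popularity, current, 2))    # 10.0
print(placement_score(candidate, popularity, current, 2))  # 9.0
```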
“…Distributed key-value stores (KVS) are a well-established approach for cloud data-intensive applications, mainly because they are capable of successfully managing the huge data traffic driven by the explosive growth of applications such as social networks, e-commerce, and enterprise systems. In this work, the focus is on a particular type of KVS, also known as an Object Store, which can store and serve any type of data (e.g., photos, images, and videos).…”
Section: Introduction (mentioning; confidence: 99%)
“…However, this eligibility constraint of each task to specific machines prevents achieving optimal performance in current systems. Moreover, loads between machines tend to be heterogeneous [8], [9] due to varying popularities between the keys, which constitutes an additional challenge. Finally, requests vary in size and the moment they are performed cannot be predicted precisely, leading to a difficult problem.…”
Section: Introduction (mentioning; confidence: 99%)
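The load heterogeneity mentioned above can be illustrated with a short simulation: when key popularity follows a skewed (Zipf-like) distribution and keys are assigned to machines by hashing, per-machine load varies widely even though each machine holds roughly the same number of keys. The parameters and hash choice below are assumptions for illustration only.

```python
# Illustrative sketch: Zipf-like key popularity plus hash-based key
# assignment yields heterogeneous per-machine load.
import hashlib

N_KEYS, N_MACHINES, ZIPF_S = 1000, 8, 1.2

# Zipf-like popularity: the k-th most popular key gets weight 1 / k^s.
popularity = [1.0 / (rank ** ZIPF_S) for rank in range(1, N_KEYS + 1)]

# Assign each key to a machine with a deterministic hash (stand-in for
# the hash-based placement of a distributed KVS) and accumulate load.
load = [0.0] * N_MACHINES
for key_id, weight in enumerate(popularity):
    node = int(hashlib.md5(str(key_id).encode()).hexdigest(), 16) % N_MACHINES
    load[node] += weight

# The machine that happens to hold the hottest keys carries far more
# traffic than the least-loaded one, despite an even key count.
print("max/min machine load ratio: %.2f" % (max(load) / min(load)))
```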