Key-value stores are a vital component in many scale-out enterprises, including social networks, online retail, and risk analysis. Accordingly, they are receiving increased attention from the research community in an effort to improve their performance, scalability, reliability, cost, and power consumption. To be effective, such efforts require a detailed understanding of realistic key-value workloads. And yet little is known about these workloads outside of the companies that operate them. This paper aims to address this gap. To this end, we have collected detailed traces from Facebook's Memcached deployment, arguably the world's largest. The traces capture over 284 billion requests from five different Memcached use cases over several days. We analyze the workloads from multiple angles, including: request composition, size, and rate; cache efficacy; temporal patterns; and application use cases. We also propose a simple model of the most representative trace to enable the generation of more realistic synthetic workloads by the community. Our analysis details many characteristics of the caching workload. It also reveals a number of surprises: a GET/SET ratio of 30:1 that is higher than assumed in the literature; some applications of Memcached behave more like persistent storage than a cache; strong locality metrics, such as keys accessed many millions of times a day, do not always suffice for a high hit rate; and there is still room for efficiency and hit rate improvements in Memcached's implementation. Toward the last point, we make several suggestions that address the exposed deficiencies.
Abstract: Data Center Networks present a novel, unique and rich environment for algorithm development and deployment. Projects are underway in the IEEE 802.1 standards body, especially in the Data Center Bridging Task Group, to define new switched Ethernet functions for data center use. One such project is IEEE 802.1Qau, the Congestion Notification project, whose aim is to develop an Ethernet congestion control algorithm for hardware implementation. A major contribution of this paper is the description and analysis of the congestion control algorithm, QCN (Quantized Congestion Notification), which has been developed for this purpose. A second contribution of the paper is an articulation of the Averaging Principle: a simple method for making congestion control loops stable in the face of increasing lags. This contrasts with two well-known methods of stabilizing control loops as lags increase, namely: (i) increasing the order of the system by sensing and feeding back higher-order derivatives of the state, and (ii) determining the lag and then choosing appropriate loop gains. Both methods have been applied in the congestion control literature to obtain stable algorithms for high bandwidth-delay product paths in the Internet. However, these methods are either undesirable or infeasible in the Ethernet context. The Averaging Principle provides a simple alternative, one which we are able to theoretically characterize.
Abstract: This paper describes Steptacular, an online interactive incentive system for encouraging people to walk more. A trial offering Steptacular to the employees of Accenture-USA was conducted over a 6-month period. Over 5,000 employees registered for the program and close to 3,000 participants wore USB-enabled pedometers; from time to time they plugged their pedometer into a computer to upload hourly step counts to a website; and the website had a range of features to encourage more walking. These features included monetary rewards, randomly redeemable through a simple game, and a social component. We describe the system and present preliminary findings about the effectiveness of each of these features in encouraging physical activity.
Data Center Networks have recently caused much excitement in the industry and in the research community. They represent the convergence of networking, storage, computing and virtualization. This paper is concerned with the Quantized Congestion Notification (QCN) algorithm, developed for Layer 2 congestion management. QCN has recently been standardized as the IEEE 802.1Qau Ethernet Congestion Notification standard. We provide a stability analysis of QCN, especially in terms of its ability to utilize high-capacity links in the shallow-buffered data center network environment. After a brief description of the QCN algorithm, we develop a delay-differential equation model for mathematically characterizing it. We analyze the model using a linearized approximation, obtaining stability margins as a function of algorithm parameters and network operating conditions. A second contribution of the paper is the articulation and analysis of the Averaging Principle (AP), a new method for stabilizing control loops when lags increase. The AP is distinct from other well-known methods of feedback stabilization such as higher-order state feedback and lag-dependent gain adjustment. It turns out that the QCN and the BIC-TCP (and CUBIC) algorithms use the AP; we show that this enables them to be stable under large lags. The AP is also of independent interest, since it applies to general control systems, not just congestion control systems.
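The averaging behavior described in these two abstracts can be sketched in a few lines. The snippet below is a minimal illustrative model, not the IEEE 802.1Qau specification: the function names, the feedback quantization, and the constants (`gd = 1/128`, a 6-bit feedback value) are assumptions chosen for clarity. It shows the core idea of the Averaging Principle as it appears in QCN-style rate recovery: after a multiplicative decrease, the sender does not jump back to its pre-congestion rate but repeatedly averages its current rate with that target, halving the remaining gap each cycle (a binary-search-like approach, similar in spirit to BIC-TCP's window growth).

```python
# Illustrative sketch of QCN-style rate control and the Averaging
# Principle. Names and constants are assumptions for this example,
# not values taken from the IEEE 802.1Qau standard text.

def qcn_decrease(rate, fb, gd=1 / 128):
    # Multiplicative decrease driven by a quantized congestion
    # feedback value fb (here assumed to be at most 64, so the
    # rate drops by at most half).
    return rate * (1 - gd * fb)

def fast_recovery(rate, target_rate, cycles=5):
    # Averaging step: each cycle moves the rate halfway back toward
    # the pre-congestion target instead of jumping to it directly.
    history = [rate]
    for _ in range(cycles):
        rate = (rate + target_rate) / 2.0
        history.append(rate)
    return history

target = 10_000.0                        # pre-congestion rate (illustrative units)
congested = qcn_decrease(target, fb=64)  # rate after a congestion signal
path = fast_recovery(congested, target)  # gap to target halves each cycle
```

With maximal feedback the rate is halved, and each recovery cycle then closes half of the remaining gap, so the rate approaches the target geometrically. This gentle, self-slowing approach toward the old operating point is what lets the loop tolerate larger feedback lags without oscillating, which is the stability property the papers analyze.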