Cloud computing is a new paradigm for delivering IT services. It has enabled many promising capabilities that cannot easily be implemented in traditional IT environments, such as elastic scalability, self-service deployment, and resiliency and recovery. Benchmarking the cloud requires a well-defined set of cloud performance metrics that can sensitively distinguish the capabilities of cloud systems enabling those features. One way to define benchmark metrics is to observe the internal mechanisms of a cloud; for example, an elasticity evaluation may be based on measuring the resource provisioning interval within the cloud. However, a more meaningful evaluation should be based on user-centric metrics. In this article, we introduce a set of performance metrics that can be directly measured, calculated, and compared by cloud users, including workload consumers and the users who deploy and manage workload life cycles. We also discuss ways to organize these user-centric metrics, with different emphases, into benchmarks that represent different use cases.
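The user-centric measurement idea above can be sketched in code: rather than reading a provider-internal provisioning interval, the user times the span between requesting a resource and observing it become usable. This is a minimal illustrative sketch, not the article's actual metric definition; the function names, the stub provisioning call, and the readiness probe are all hypothetical placeholders for a real cloud API.

```python
import time

def measure_provisioning_interval(provision_fn, is_ready_fn, poll_s=0.01):
    """Return the wall-clock seconds between a provisioning request and
    the resource becoming usable, as observable from the user's side.

    provision_fn -- hypothetical call that requests the resource
    is_ready_fn  -- hypothetical probe returning True once it is usable
    """
    start = time.monotonic()
    provision_fn()
    while not is_ready_fn():
        time.sleep(poll_s)
    return time.monotonic() - start

if __name__ == "__main__":
    # Stub "cloud" that becomes ready ~50 ms after the request.
    ready_at = time.monotonic() + 0.05
    interval = measure_provisioning_interval(
        provision_fn=lambda: None,
        is_ready_fn=lambda: time.monotonic() >= ready_at,
    )
    print(f"user-observed provisioning interval: {interval:.3f}s")
```

Because the timer brackets only what the user can observe, the same harness compares providers without any knowledge of their internal scheduling or placement mechanisms.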