The context of this work is the performance evaluation of IT systems through load testing. Load testing typically consists of generating a flow of requests against a system under test and measuring response times, request throughput, or computing resource usage. A quick survey of available load testing platforms shows that hundreds of them exist, including in the open source domain. Yet many testers still develop their own ad hoc load testing tooling. Why? This paper starts by looking for possible answers to this question in order to introduce the CLIF load injection framework, which intends not to be yet another load testing platform. Based on the Fractal component model, the CLIF open source project aims at addressing key issues such as flexibility, adaptation, and scalability. We detail CLIF's architecture and associated tools, as well as feedback from a number of practical uses.
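To make the basic load testing workflow concrete, the sketch below implements a minimal closed-loop injector in Python: a fixed number of virtual users send requests to a placeholder URL while response times and throughput are aggregated. This is only an illustration of the general principle described above; CLIF itself is a Java framework built on the Fractal component model, and none of the names below come from its API.

```python
# Minimal closed-loop load injector sketch (illustrative only; CLIF itself
# is a Java/Fractal framework with a much richer component architecture).
import threading
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"   # placeholder system under test
VIRTUAL_USERS = 10                      # concurrent request loops
DURATION_S = 30                         # test duration in seconds

results = []                            # (timestamp, response_time_s) samples
lock = threading.Lock()

def virtual_user(stop_at):
    """One virtual user: request, measure, repeat until the deadline."""
    while time.time() < stop_at:
        start = time.time()
        try:
            urllib.request.urlopen(TARGET_URL, timeout=5).read()
            elapsed = time.time() - start
            with lock:
                results.append((start, elapsed))
        except OSError:
            pass  # a real harness would count and report errors separately

stop_at = time.time() + DURATION_S
threads = [threading.Thread(target=virtual_user, args=(stop_at,))
           for _ in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

if results:
    times = [rt for _, rt in results]
    print(f"requests: {len(results)}, "
          f"throughput: {len(results) / DURATION_S:.1f} req/s, "
          f"mean response time: {sum(times) / len(times) * 1000:.1f} ms")
```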
This paper advocates the introduction of performance awareness in autonomic systems. The motivation is to be able to predict the performance of a target configuration when a self-* feature is planning a system reconfiguration. We propose a global, partially automated process based on queues and queueing network models. This process includes decomposing a distributed application into black boxes, identifying a queue model for each black box, and assembling these models into a queueing network according to the candidate target configuration. Performance prediction is then carried out either through simulation or through analysis. This paper sketches the global process and focuses on the black-box model identification step, which is automated thanks to a load testing platform enhanced with a workload control loop. Model identification is based on statistical tests and is illustrated by experimental results.
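As a rough illustration of the black-box identification idea, the following Python sketch fits the simplest queue model, M/M/1 (mean response time R = 1/(μ − λ)), to hypothetical (arrival rate, response time) measurements and uses the fitted model to predict performance at a candidate load. The paper's actual procedure relies on statistical tests driven by an instrumented load testing platform; the data and the fitting method here are assumptions made purely for illustration.

```python
# Simplified illustration of black-box model identification: fit an M/M/1
# queue (R = 1/(mu - lambda)) to measured (arrival rate, response time)
# pairs, then predict the response time of a candidate configuration.
# The measurements below are made-up numbers, not experimental data.

measurements = [(10.0, 0.025), (20.0, 0.034), (30.0, 0.050), (40.0, 0.100)]

# For M/M/1, 1/R = mu - lambda, so mu can be estimated as the average of
# lambda + 1/R over the samples.
mu_est = sum(lam + 1.0 / r for lam, r in measurements) / len(measurements)
print(f"estimated service rate mu ~= {mu_est:.1f} req/s")

def predict_response_time(lam):
    """Predicted mean response time under the fitted M/M/1 model."""
    if lam >= mu_est:
        raise ValueError("arrival rate exceeds estimated capacity")
    return 1.0 / (mu_est - lam)

# Predict for a candidate configuration driving 45 req/s to this black box.
print(f"predicted R at 45 req/s: {predict_response_time(45.0) * 1000:.1f} ms")
```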
Load testing of applications is an important and costly activity for software provider companies. Classical solutions are very difficult to set up statically, and their cost is prohibitive in terms of both human and hardware resources. Virtualized cloud computing platforms provide new opportunities for stressing an application's scalability by offering a large range of flexible and less expensive (pay-per-use) computation units. Building on these advantages, load testing solutions can be provided on demand in the cloud. This paper describes a Benchmark-as-a-Service solution that automatically scales the load injection platform and facilitates its setup according to load profiles. Our approach is based on: (i) virtualization of the benchmarking platform to create self-scaling injectors; (ii) online calibration to characterize each injector's capacity and its impact on the benched application; and (iii) a provisioning solution to appropriately scale the load injection platform ahead of time. We also report experiments on a benchmark illustrating the benefits of this system in terms of cost and resource reductions.
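The sketch below illustrates the ahead-of-time provisioning idea in Python: given a target load profile and a per-injector capacity obtained from calibration, it computes when extra injector VMs must be started so that they are ready before the load requires them. The profile, the capacity figure, and the startup delay are all hypothetical; the paper's actual provisioning solution is considerably richer.

```python
# Sketch of ahead-of-time injector provisioning: given a load profile
# (time, target virtual users) and a calibrated per-injector capacity,
# compute when new injector VMs must be started so they are ready in time.
# All names and numbers are illustrative assumptions.
import math

LOAD_PROFILE = [(0, 100), (60, 400), (120, 900), (180, 1500)]  # (t_s, users)
INJECTOR_CAPACITY = 500   # virtual users one injector sustains (calibrated)
VM_STARTUP_S = 45         # time to boot and enroll a new injector VM

def provisioning_plan(profile, capacity, startup):
    """Return (start_time_s, extra_vm_count) actions for the profile."""
    plan, current = [], 0
    for t, users in profile:
        needed = math.ceil(users / capacity)
        if needed > current:
            # start extra injectors early enough to be ready at time t
            plan.append((max(0, t - startup), needed - current))
            current = needed
    return plan

for start_t, count in provisioning_plan(LOAD_PROFILE, INJECTOR_CAPACITY,
                                        VM_STARTUP_S):
    print(f"t={start_t:4d}s: start {count} injector VM(s)")
```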
Software application providers have always been required to perform load testing before launching new applications. This crucial test phase is expensive in human and hardware terms, and the solutions generally used would benefit from further development. In particular, designing an appropriate load profile to stress an application is difficult and must be done carefully to avoid skewed testing. In addition, static testing platforms are exceedingly complex to set up. Cloud computing opens new opportunities to ease load testing. This paper describes a Benchmark-as-a-Service platform based on: (i) intelligent generation of traffic to the benched application without inducing thrashing (avoiding predefined load profiles), and (ii) a virtualized, self-scalable load injection system. This platform was found to reduce the cost of testing by 50% compared to more commonly used solutions. It was evaluated on the reference JEE benchmark RUBiS, including the detection of bottleneck tiers.
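To illustrate traffic generation without a predefined load profile, the following Python sketch ramps the number of virtual users step by step and stops when a saturation criterion (here, a response-time threshold) is reached, so the benched application is never driven into thrashing. The probe function and thresholds are invented for illustration and do not reflect the platform's actual heuristics.

```python
# Sketch of saturation-driven load ramp-up: increase virtual users step by
# step and stop at a saturation criterion instead of following a
# predefined load profile. measure_response_time() is a stand-in for a
# real injection step against the benched application.
import random

def measure_response_time(users):
    """Hypothetical probe: mean response time (s) at a given user count."""
    capacity = 800   # pretend the benched system saturates near 800 users
    base = 0.050     # pretend unloaded response time of 50 ms
    return base / max(1e-6, 1 - users / capacity) + random.gauss(0, 0.002)

MAX_RT_S = 0.5    # saturation threshold on mean response time
STEP = 50         # virtual users added per ramp step
USER_CAP = 2000   # safety bound on the ramp

users = STEP
while True:
    rt = measure_response_time(users)
    print(f"{users:4d} users -> {rt * 1000:6.1f} ms")
    if rt > MAX_RT_S or users >= USER_CAP:
        print(f"saturation reached around {users} users")
        break
    users += STEP
```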