[Context] A variety of testing tools have been developed to support and automate software performance testing activities. These tools may use different techniques, such as Model-Based Testing (MBT) or Capture and Replay (CR). [Goal] For software companies, it is important to evaluate such tools w.r.t. the effort required to create test artifacts with them; despite its importance, there are few empirical studies comparing performance testing tools, especially tools developed with different approaches. [Method] We are conducting experimental studies to provide evidence about the effort required to use CR-based tools and MBT tools. In this paper, we present our first results, evaluating the effort (time spent) when using the LoadRunner and Visual Studio CR-based tools, and the PLeTsPerf MBT tool, to create performance test scripts and scenarios to test Web applications, in the context of a collaboration project between the Software Engineering Research Center at PUCRS and a technological laboratory of a global IT company. [Results] Our results indicate that, for simple testing tasks, the effort of using a CR-based tool was lower than that of using an MBT tool, but as the complexity of the testing tasks increases, the advantage of using MBT grows significantly. [Conclusions] To conclude, we discuss the lessons we learned from the design, operation, and analysis of our empirical experiment.
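To make the MBT side of the comparison concrete, the sketch below shows the core idea behind deriving performance test scenarios from a usage model: once the model exists, scenarios are generated mechanically, whereas a CR tool requires recording each interaction by hand. The model format, state names, and generation strategy here are hypothetical illustrations, not PLeTsPerf's actual notation.

```python
import random

# Hypothetical usage model of a Web application: each state maps to
# (next_state, transition_probability) pairs. Illustrative only.
USAGE_MODEL = {
    "home":    [("search", 0.7), ("login", 0.3)],
    "login":   [("home", 1.0)],
    "search":  [("details", 0.6), ("home", 0.4)],
    "details": [("home", 1.0)],
}

def generate_scenario(model, start="home", steps=6, seed=None):
    """Random walk over the usage model, yielding one abstract test scenario."""
    rng = random.Random(seed)
    path, state = [start], start
    for _ in range(steps):
        targets, weights = zip(*model[state])
        state = rng.choices(targets, weights=weights, k=1)[0]
        path.append(state)
    return path

# Deriving many scenarios is mechanical once the model is built, which is
# where MBT can amortize its up-front modeling effort on complex test suites.
for i in range(3):
    print("scenario", i, "->", " -> ".join(generate_scenario(USAGE_MODEL, seed=i)))
```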
Before placing a software system into production, it is necessary to guarantee that it provides users with a certain level of Quality-of-Service. Intensive performance testing is then necessary to achieve such a level, and the tests require an isolated computing environment. Virtualization can therefore play an important role in saving energy costs, by reducing the number of servers required to run performance tests, and in allowing performance isolation when executing multiple tests on the same computing infrastructure. Load generation is an important component in performance testing, as it simulates users interacting with the target application. This paper presents our experience in using a virtualized environment for load generation aimed at performance testing. We measured several performance metrics and varied the system load, the number of virtual machines per physical resource, and the CPU pinning schema to compare virtual and physical machines. The two main findings from our experiments are that physical machines produced steadier and faster response times under heavy load, and that the pinning schema is an important aspect when setting up a virtualized environment for load generation.
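On Linux, one way to experiment with a pinning schema at the load-generator level is `os.sched_setaffinity`, which restricts a process to a CPU set. The sketch below is a minimal, assumed setup (one worker per core, request logic omitted), not the paper's actual configuration, which pins virtual machines rather than processes.

```python
import os
import multiprocessing as mp

def load_worker(cpu: int) -> None:
    # Pin this worker process to a single CPU (Linux-only API);
    # the set must contain at least one CPU number.
    os.sched_setaffinity(0, {cpu})
    # ... issue requests against the target application here (omitted) ...
    print(f"worker pid={os.getpid()} pinned to CPUs {os.sched_getaffinity(0)}")

if __name__ == "__main__":
    # Hypothetical schema: one load-generator process per core, mirroring
    # the kind of pinning variation the experiment explores for VMs.
    for cpu in range(min(4, os.cpu_count() or 1)):
        mp.Process(target=load_worker, args=(cpu,)).start()
```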
Performance testing is a highly specialized task, since it requires a performance engineer to know the application to be tested, its usage profile, and the infrastructure where it will execute. Moreover, it requires that testing teams expend considerable effort and time on its automation. In this paper, we present PLeTsPerf, a model-based performance testing tool that supports the automatic generation of scenarios and scripts from application models. PLeTsPerf is a mature tool, developed in collaboration with an IT company, that has been used in several works, including experimental and pilot studies. We present a usage example to demonstrate the process of generating test scripts and scenarios from UML models to test a Web application. We also present the lessons learned and discuss our conclusions about the use of the tool.
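As a rough illustration of the kind of artifact such tools produce, here is a minimal hand-written load script using only the Python standard library. The target URL, user count, and think time are placeholder scenario parameters; this is not PLeTsPerf output, which targets formats such as LoadRunner and Visual Studio scripts.

```python
import threading
import time
import urllib.request

TARGET_URL = "http://example.com/"  # placeholder target, not a real system under test
VIRTUAL_USERS = 5                   # scenario parameters: concurrent users,
REQUESTS_PER_USER = 3               # requests per user, and think time
THINK_TIME_S = 1.0

def virtual_user(user_id: int) -> None:
    """Simulate one user of the scenario: request, measure, think, repeat."""
    for i in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
            elapsed = time.perf_counter() - start
            print(f"user {user_id}, request {i}: HTTP {resp.status} in {elapsed:.3f}s")
        time.sleep(THINK_TIME_S)

threads = [threading.Thread(target=virtual_user, args=(u,)) for u in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```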