Abstract. Security metrics have been proposed to assess the security of software applications based on the principles of "reduce attack surface" and "grant least privilege." While these metrics can help a developer choose designs that provide better security, on their own they cannot show exactly how to make an application more secure. Even if they could, the onerous task of updating the software to improve its security would still fall to the developer. In this paper we present an approach to the automated improvement of software security based on search-based refactoring. We use the search-based refactoring platform Code-Imp to refactor the code in a fully automated fashion, guided by a fitness function based on a number of software security metrics. The purpose is to improve the security of the software immediately prior to its release and deployment. To test the value of this approach, we apply it to an industrial banking application with a strong security dimension, namely Wife. The results show an average improvement of 27.5% in the metrics examined. A more detailed analysis reveals that 15.5% of the metric improvement corresponds to real improvement in program security, while the remaining 12% is attributable to hitherto undocumented weaknesses in the security metrics themselves.
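The search loop described in this abstract (refactorings accepted only when they improve a metric-based fitness function) can be sketched as follows. This is a minimal illustration, not Code-Imp's actual implementation: the metric names, the toy `measure` and `neighbours` functions, and the acceptance rule are all illustrative assumptions.

```python
import random

def fitness(metrics):
    """Combine 'lower is better' security metrics (e.g. attack-surface
    size) into a single score to maximise."""
    return -sum(metrics.values()) / len(metrics)

def hill_climb(state, neighbours, measure, steps=200):
    """First-ascent hill climbing: apply a candidate refactoring only
    if it improves the security fitness."""
    best = fitness(measure(state))
    for _ in range(steps):
        candidate = random.choice(neighbours(state))
        score = fitness(measure(candidate))
        if score > best:
            state, best = candidate, score
    return state, best

# Toy model: the "program" is just its metric vector, and a
# "refactoring" perturbs the metrics slightly.
def measure(state):
    return state

def neighbours(state):
    return [{k: max(0.0, v + random.uniform(-0.1, 0.05))
             for k, v in state.items()}]

random.seed(1)
start = {"attack_surface": 0.8, "classified_attribute_accessibility": 0.6}
refactored, score = hill_climb(start, neighbours, measure)
print(f"fitness improved from {fitness(start):.3f} to {score:.3f}")
```

Because candidates are accepted only on strict improvement, the final fitness can never be worse than the starting fitness, mirroring how a search-based refactoring tool leaves the code unchanged when no refactoring helps.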
Abstract. Performance evaluation through regression testing is an important step in the software production process. It aims to ensure that the performance of a new release does not regress under a field-like load. The main outputs of regression tests are metrics representing the response times of various transactions as well as resource utilization (CPU, disk I/O, and network). In this paper, we propose to use a concept known as the Transaction Profile, which provides a detailed, load-independent representation of a transaction, to detect anomalies in performance test runs. The approach uses data readily available from performance regression tests, together with a queueing network model of the system under test, to infer the Transaction Profiles. Our initial results show that Transaction Profiles calculated from load regression test data uncover the performance impact of any update to the software. We therefore conclude that Transaction Profiles are an effective means of helping testing teams ensure that each new software release does not suffer performance regression.
Summary. As part of the process of testing a new release of an application, the performance testing team needs to confirm that existing functionality does not perform worse than in the previous release, a problem known as a performance regression anomaly. Most existing approaches to analysing performance regression testing data vary with the applied workload, which usually leads to the need for an extra performance testing run. To ease such lengthy tasks, we propose a new workload-independent, automated technique to detect anomalies in performance regression testing data using a concept known as the transaction profile (TP). The TP is inferred from the performance regression testing data along with a queueing network model of the testing system. Based on a case study conducted against two web applications, one open source and one industrial, we have been able to automatically generate the ‘TP run report’ and verify that it can be used to uncover performance regression anomalies caused by software updates. In particular, the report helped us to isolate real anomalies from those caused by workload changes, with an average F1 measure of 85% for the open source application and 90% for the industrial application. These results support our proposal to use the TP as a more efficient technique for identifying performance regression anomalies than state-of-the-art industry and research techniques. Copyright © 2015 John Wiley & Sons, Ltd.
Performance regression testing is an important step in the production process of enterprise applications. Yet analysing this type of testing data is mainly conducted manually and depends on the load applied during the test. To ease this manual task, we present an automated, load-independent technique to detect performance regression anomalies based on the analysis of performance testing data using a concept known as the Transaction Profile. The approach can be automated and utilises data already available to the performance testing team, along with a queueing network model of the testing system. The presented "Transaction Profile Run Report" was able to automatically catch performance regression anomalies caused by software changes and isolate them from those caused by load variations, with a precision of 80% in a case study conducted against an open source application. Hence, by deploying our system, testing teams can detect performance regression anomalies while avoiding the manual approach and eliminating the need for extra runs with varying load.
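The load-independence claimed for the Transaction Profile comes from working with per-transaction service demands rather than load-sensitive response times. A minimal sketch of the idea, assuming the service demand law (D = U / X) and made-up utilization and throughput numbers; the actual inference in these papers goes through a full queueing network model and is more involved:

```python
# Hypothetical sketch: approximating a Transaction Profile (TP) from
# regression-test measurements via the service demand law D_r = U_r / X.
# Resource names, utilizations, and the 10% threshold are illustrative.

def service_demands(utilizations, throughput):
    """Per-resource service demand of a transaction type: D_r = U_r / X."""
    return {r: u / throughput for r, u in utilizations.items()}

def transaction_profile(demands):
    """TP: total service demand across resources. Load-independent, so it
    can be compared across test runs that used different workloads."""
    return sum(demands.values())

# Measured during two regression-test runs (assumed values):
baseline = service_demands({"cpu": 0.40, "disk": 0.10}, throughput=20.0)
new_rel  = service_demands({"cpu": 0.55, "disk": 0.10}, throughput=20.0)

tp_base = transaction_profile(baseline)
tp_new  = transaction_profile(new_rel)
if tp_new > tp_base * 1.10:  # flag >10% TP growth as a candidate anomaly
    print("possible performance regression anomaly")
```

Here the CPU demand grows from 20 ms to 27.5 ms per transaction while throughput stays fixed, so the TP comparison flags a software-side regression rather than a workload effect.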
Using modelling to predict the performance characteristics of software applications typically relies on queueing network models representing the various system hardware resources. Leaving software resources, such as a limited number of threads, out of such models reduces prediction accuracy. Accounting for software contention is a challenging task, as existing techniques to model software components are complex and require deep knowledge of the software architecture. They also require complex measurement processes to obtain the model's service demands, and solving the resulting model usually requires simulation solvers, which are often time consuming. In this work, we aim to provide a simpler model for three-tier web software systems that accounts for software contention and can be solved by time-efficient analytical solvers. We achieve this by extending the existing "Two-Level Iterative Queuing Modelling of Software Contention" method to handle the number of threads at the application server tier and the number of data sources at the database server tier. This is done in a generic manner to allow the solution to be extended to other software components such as memory and critical sections. Initial results show that our technique clearly outperforms existing techniques.
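The analytical flavour of this approach can be illustrated with exact Mean-Value Analysis (MVA) for a closed, single-class queueing network, plus a deliberately simplified stand-in for the two-level idea: a thread pool capping the population seen by the hardware model. The iterative method named in the abstract is not reproduced here; the demands and pool sizes below are illustrative assumptions.

```python
def mva(demands, population, think_time=0.0):
    """Exact MVA for a closed, single-class queueing network.
    `demands` holds the per-station service demands in seconds."""
    queue = [0.0] * len(demands)
    throughput = 0.0
    for n in range(1, population + 1):
        # Residence time at each station grows with the queue left behind.
        residence = [d * (1 + q) for d, q in zip(demands, queue)]
        throughput = n / (think_time + sum(residence))
        queue = [throughput * r for r in residence]
    return throughput, queue

def capped_throughput(demands, population, threads):
    """Crude software-contention stand-in: only `threads` requests can
    occupy the hardware resources concurrently, so the hardware model is
    solved for the capped population."""
    return mva(demands, min(population, threads))[0]

# App-server CPU and database disk service demands (illustrative).
demands = [0.05, 0.03]
x_small_pool = capped_throughput(demands, population=50, threads=10)
x_large_pool = capped_throughput(demands, population=50, threads=40)
```

Throughput is bounded above by the bottleneck limit 1 / max(D) = 20 transactions/s regardless of the pool size, and enlarging the thread pool can only move the system closer to that bound, which is why modelling the thread limit explicitly matters for prediction accuracy.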