Many recent studies have shown that multi-objective evolutionary algorithms are widely applied in search-based software engineering (SBSE) to find optimal solutions. Most of them focus either on solving newly re-formulated problems or on proposing new approaches, while a number of studies perform reviews and comparative studies of the performance of the proposed algorithms. Evaluating such performance requires a number of performance metrics, which play important roles when the investigated algorithms are evaluated and compared on their best simulated results. Although the literature contains hundreds of performance metrics that can quantify performance in such tasks, no systematic review has been conducted to provide evidence of how these metrics are used, particularly in the software engineering problem domain. In this paper, we aim to review and quantify the types of performance metrics, the numbers of objectives, and the software engineering application areas reported in primary studies; we hope this will inspire the SBSE community to explore such approaches in greater depth. To perform this task, a formal systematic review protocol was applied for planning, searching, and extracting the desired elements from the studies. After applying the relevant inclusion and exclusion criteria to the search process, 105 relevant articles were identified from the targeted online databases as scientific evidence to answer the eight research questions. The preliminary results show that a remarkable number of studies were reported without considering any performance metric for algorithm evaluation.
Based on the 27 performance metrics identified, hypervolume, inverted generational distance, generational distance, and hypercube-based diversity metrics appear to be the most widely adopted across studies in software requirements engineering, software design, software project management, software testing, and software verification. Additionally, there is increasing interest in the community in re-formulating many-objective problems with more than three objectives, yet current work remains dominated by formulations with two or three objectives.
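For readers unfamiliar with the most widely adopted metric above: hypervolume measures the volume of objective space that a solution set dominates, bounded by a reference point, so larger values indicate a better front. A minimal two-objective sketch in Python follows; it is illustrative only (not taken from any reviewed study), and the function name and input format are assumptions.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D Pareto front (minimization) with respect to a
    reference point `ref` that is dominated by every point in the front.
    Assumes `front` contains only mutually non-dominated points."""
    hv = 0.0
    prev_f1 = ref[0]
    # Sweep from the largest f1 value downward, summing rectangular slices:
    # each point contributes a slice of width (prev_f1 - f1) and height (ref2 - f2).
    for f1, f2 in sorted(front, reverse=True):
        hv += (prev_f1 - f1) * (ref[1] - f2)
        prev_f1 = f1
    return hv

# Example: three non-dominated points against reference point (6, 6).
print(hypervolume_2d([(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)], (6.0, 6.0)))  # 17.0
```

Exact hypervolume computation in higher dimensions is considerably more expensive; the slice-sweep idea above only works directly in two dimensions.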
Regression testing is an important task in software development, but it is often associated with high costs and increased project expenses. To address this challenge, prioritizing test cases during test execution is essential, as it aims to identify hidden faults in the software as quickly as possible. In the literature, several test case prioritization (TCP) techniques have been proposed and evaluated. However, existing weight-based TCP techniques often overlook the true diversity coverage of test cases, resulting in average-based weighting practices and a lack of systematic calculation of test case weights. Our research prioritizes test cases by considering multiple code coverage criteria. The study presents a novel diversity technique that calculates a diversity coverage score for each test case; this score serves as a weight for ranking the test cases. To evaluate the proposed technique, an experiment was conducted on five open-source programs, and performance was measured in terms of the average percentage of fault detection (APFD), with a comparison against an existing technique. The results revealed that the proposed technique significantly improved the fault detection rate compared to the existing approach. To our knowledge, this study is the first to incorporate the true diversity score of test cases into the TCP process. The findings make a valuable contribution to regression testing by enhancing the effectiveness of the testing process through diversity-based weighting.