Software testing consumes a significant portion of software development effort. Program entities such as branches or definition-use pairs (DUPs) are used in diverse software development tasks. In this study, the authors present a novel evolution-based approach to generating test data for all-definition-use coverage. First, a reduction algorithm computes, from the full set of DUPs, a subset whose coverage guarantees coverage adequacy. A genetic algorithm is then applied to generate test data for this subset, where the fitness of an individual depends on the matching degree between the path traversed by its execution and the definition-clear path of each target DUP. The authors also investigate the coverage and test-suite size achieved by applying the approach to 15 widely used subject programs. The experimental results show that the approach reduces the number of generated test cases without lowering the coverage rate.
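The fitness idea above can be sketched as follows. This is an illustrative sketch only, not the authors' actual implementation: the path representation (a sequence of node identifiers), the in-order matching rule, and the function names are assumptions.

```python
# Hypothetical sketch: fitness of a test input for one target def-use pair,
# based on how much of the DUP's definition-clear path the traversed
# execution path matches, in order. Names and representation are assumed.

def matching_degree(traversed_path, def_clear_path):
    """Fraction of the definition-clear path matched, in order, by the
    nodes visited during execution."""
    matched = 0
    visited_iter = iter(traversed_path)
    for node in def_clear_path:
        for visited in visited_iter:
            if visited == node:
                matched += 1
                break
    return matched / len(def_clear_path)

def fitness(test_input, run_program, def_clear_path):
    """Higher fitness means the execution comes closer to covering the
    target DUP; run_program returns the executed node sequence."""
    traversed = run_program(test_input)
    return matching_degree(traversed, def_clear_path)
```

A genetic algorithm would then evolve the population of test inputs toward individuals whose matching degree reaches 1.0, i.e. inputs that fully traverse the definition-clear path.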
Combinatorial interaction testing (CIT), a black-box testing method, has been well studied in recent years. It aims at constructing effective interaction test suites so as to identify faults caused by interactions among parameters. Once an interaction test suite has been generated by CIT, the execution order of its test cases becomes critical because testing resources are limited. Prioritization of interaction test suites is used to determine this order. Random prioritization (RP) of test cases is considered simple but ineffective, and existing research suggests that adaptive random prioritization (ARP) is a promising replacement. However, previous ARP techniques cannot be used to prioritize interaction test suites because such suites lack source-code-related information, such as statement, function, or branch coverage. In this paper, we propose an ARP strategy that prioritizes interaction test suites using interaction coverage information alone, without source-code-related information, and we also unify the RP strategy with the traditional interaction-coverage-based prioritization strategy (ICBP). Simulation studies indicate that the ARP strategy outperforms RP, test-case-generation prioritization, and reverse test-case-generation prioritization, and that it is more time-saving than ICBP while maintaining similar, or even better, effectiveness.
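The core idea of prioritizing by interaction coverage alone can be sketched as below. This is a simplified sketch of the general ARP-with-interaction-coverage idea, not the paper's exact algorithm: the fixed candidate-set size, the restriction to 2-way interactions, and all names are assumptions.

```python
import random
from itertools import combinations

def uncovered_pairs(tc, covered):
    """2-way (parameter-position, value) interactions of test case tc
    that are not yet in the covered set."""
    return {((i, tc[i]), (j, tc[j]))
            for i, j in combinations(range(len(tc)), 2)} - covered

def arp_order(test_suite, candidates=5, seed=0):
    """Adaptive random prioritization driven purely by interaction
    coverage: at each step, sample a fixed-size candidate set from the
    unordered tests and schedule the candidate that covers the most
    not-yet-covered 2-way interactions."""
    rng = random.Random(seed)
    remaining = list(test_suite)
    covered, order = set(), []
    while remaining:
        sample = rng.sample(remaining, min(candidates, len(remaining)))
        best = max(sample, key=lambda tc: len(uncovered_pairs(tc, covered)))
        covered |= uncovered_pairs(best, covered)
        order.append(best)
        remaining.remove(best)
    return order
```

Note that no source-code coverage is consulted anywhere: the only signal is the set of parameter-value interactions each test contributes.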
Combinatorial interaction testing is a well-studied testing strategy that has been widely applied in practice. It relies on combinatorial interaction test suites, such as fixed-strength and variable-strength interaction test suites. Because testing resources are constrained in some applications, for example combinatorial interaction regression testing, prioritization of combinatorial interaction test suites has been proposed to improve testing efficiency. However, nearly all existing prioritization techniques support only fixed-strength interaction test suites rather than variable-strength ones. In this paper, we propose two heuristic methods that prioritize variable-strength interaction test suites by exploiting their special characteristics. The experimental results show that our methods are more effective for variable-strength interaction test suites than prioritization by test case generation order, random test prioritization, and fixed-strength interaction test suite prioritization. Our methods also have additional advantages over prioritization techniques designed for fixed-strength interaction test suites.
With the continuous evolution of software systems, test suites often grow very large, and rerunning all test cases may be impractical in regression testing under limited resources. Coverage-based test case prioritization techniques have been proposed to improve the effectiveness of regression testing. However, the original test suite often contains test cases designed to exercise production features or exceptional behaviors rather than to maximize code coverage, so coverage-based prioritization does not always produce satisfactory results. In this context, we propose a global similarity-based regression test case prioritization approach that reschedules the execution order of test cases based on the distances between pairs of test cases. We designed and conducted empirical studies on four C programs to validate the effectiveness of the proposed approach, and we empirically compared the effects of six similarity measures on it. Experimental results show that the global similarity-based approach using Euclidean distance is the most effective. This study aims to provide practical guidelines for choosing an appropriate similarity measure.
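A minimal sketch of similarity-based prioritization with Euclidean distance follows. The greedy "farthest-first" scheduling rule, the choice of the first test, and the coverage-vector representation are assumptions for illustration, not necessarily the paper's exact procedure.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two coverage vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def similarity_order(coverage, first=0):
    """Illustrative global similarity-based ordering: start from one test,
    then repeatedly schedule the remaining test whose minimum Euclidean
    distance to the already-scheduled tests is largest, so that mutually
    dissimilar test cases run early."""
    ids = list(coverage)
    order = [ids.pop(first)]
    while ids:
        nxt = max(ids, key=lambda t: min(euclidean(coverage[t], coverage[s])
                                         for s in order))
        order.append(nxt)
        ids.remove(nxt)
    return order
```

With binary statement-coverage vectors, two tests exercising the same code have distance 0 and are pushed apart in the schedule, which is the intuition behind running diverse tests first.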
Selecting a subset of test cases with high fault-detection capability is a key issue in code-level regression testing. Cluster analysis has been proposed to address it: test cases are partitioned into clusters based on the similarity of their execution profiles. In previous studies, execution profiles were represented as binary or numeric vectors. The vector model considers only the number of times each function or statement is executed; it ignores the sequential, call-relation, and structural information between function calls, so vector-based methods do not always produce satisfying results. In this study, the authors present a cluster analysis of three different types of structural profiles: function execution sequences, function call sequences (FCSs), and function call trees. They designed and conducted empirical studies on five medium-sized programs to validate the effects of the different profiles on regression test case reduction. Experimental results show that sequential, call-relation, and structural information can further improve fault-detection effectiveness. In terms of cost-effectiveness, the FCS is regarded as the optimal profile. Furthermore, cluster analysis of FCSs is comparable to the additional branch coverage reduction technique with respect to fault-detection effectiveness.
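The reduction idea can be sketched as below. The greedy one-pass clustering, the use of difflib's sequence similarity, and the 0.8 threshold are all assumptions made for illustration; the study's actual clustering algorithm and similarity measure for call sequences may differ.

```python
from difflib import SequenceMatcher

def cluster_by_call_sequence(profiles, threshold=0.8):
    """Greedily cluster test cases whose function-call sequences are
    similar: a test joins the first cluster whose representative's
    sequence it matches above the threshold, else starts a new cluster."""
    clusters = []
    for test_id, seq in profiles.items():
        for cluster in clusters:
            rep_seq = profiles[cluster[0]]
            if SequenceMatcher(None, seq, rep_seq).ratio() >= threshold:
                cluster.append(test_id)
                break
        else:
            clusters.append([test_id])
    return clusters

def reduced_suite(profiles, threshold=0.8):
    """Keep one representative test case per cluster."""
    return [c[0] for c in cluster_by_call_sequence(profiles, threshold)]
```

Because the profile is a sequence of function names rather than a count vector, two tests that call the same functions in different orders can land in different clusters, which is exactly the information a vector model discards.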
Regression testing is a very time-consuming and expensive activity, and many test case prioritization techniques have been proposed to speed it up. Previous studies show that no single technique is always best. The random strategy, though the simplest, is not always bad; in particular, when a test suite has high fault-detection capability, it can produce good results. Nevertheless, because of its randomness, it is not always as satisfactory as expected. In this context, we present a test case prioritization approach that uses the fixed-size-candidate-set adaptive random testing algorithm to reduce the effect of randomness and improve fault-detection effectiveness. The distance between pairs of test cases is measured by exclusive OR. We designed and conducted empirical studies on eight C programs to validate the effectiveness of the proposed approach. The experimental results, confirmed by statistical analysis, indicate that the proposed approach is more effective than the random and total greedy prioritization techniques in terms of fault-detection effectiveness. Although it has fault-detection effectiveness comparable to the ART-based and additional greedy techniques, its time cost is much lower, making it considerably more cost-effective.
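A sketch of fixed-size-candidate-set adaptive random prioritization with an XOR distance follows. The candidate-set size, the min-max selection rule, and the binary coverage-vector representation are assumptions for illustration; the paper's exact parameters may differ.

```python
import random

def xor_distance(u, v):
    """Distance between two binary coverage vectors: number of positions
    where exclusive OR yields 1 (i.e. Hamming distance)."""
    return sum(a ^ b for a, b in zip(u, v))

def fscs_art_order(coverage, k=10, seed=0):
    """Fixed-size-candidate-set ART prioritization: pick the first test
    at random, then at each step sample k candidates and schedule the one
    whose minimum XOR distance to all already-prioritized tests is
    maximal, spreading early tests across dissimilar behaviors."""
    rng = random.Random(seed)
    remaining = list(coverage)
    order = [remaining.pop(rng.randrange(len(remaining)))]
    while remaining:
        sample = rng.sample(remaining, min(k, len(remaining)))
        best = max(sample,
                   key=lambda t: min(xor_distance(coverage[t], coverage[s])
                                     for s in order))
        order.append(best)
        remaining.remove(best)
    return order
```

Limiting each step to k candidates is what keeps the time cost well below full ART-based prioritization, which compares every remaining test against every scheduled one.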
Object-oriented software systems evolve frequently to meet new change requirements, and understanding the characteristics of changes helps testers and system designers improve software quality. Identifying important modules is a key issue in this evolution process. In this context, a novel network-based approach is proposed to comprehensively investigate change distributions and the correlation between centrality measures and the scope of change propagation. First, software dependency networks are constructed at the class level. Then the number of co-changes among classes is mined from software repositories. From the dependency relationships and the co-change counts, the scope of change propagation is calculated, and Spearman rank correlation is used to analyze its relationship with centrality measures. Three case studies on the Java open-source projects FindBugs, Hibernate, and Spring are conducted to study the characteristics of change propagation. Experimental results show that (i) the change distribution is very uneven, and (ii) PageRank, Degree, and CIRank are significantly correlated with the scope of change propagation. In particular, CIRank shows a higher correlation coefficient, suggesting it can be a more useful indicator for measuring the scope of change propagation of classes in object-oriented software systems.
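The correlation analysis above can be sketched with one of the simpler centrality measures. The sketch below uses degree centrality on an undirected class dependency graph plus a self-contained Spearman rank correlation (with average ranks for ties); the study's PageRank and CIRank measures, and its exact propagation-scope definition, are not reproduced here.

```python
def degree_centrality(edges, nodes):
    """Degree of each class in an undirected dependency network."""
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with tied values receiving their average rank."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average 1-based rank of the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Given a per-class propagation-scope score mined from co-change history, one would compute `spearman(list of centralities, list of scopes)` over the same class ordering; a coefficient near 1 indicates that central classes tend to propagate changes widely.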