Background.
Recent years have seen increasing interest in cross-project defect prediction (CPDP), which aims to apply defect prediction models built on source projects to a target project. A variety of (often complex) CPDP models have been proposed, with promising prediction performance.
Problem.
Most, if not all, of the existing CPDP models have not been compared against simple module size models, which are easy to implement and have shown good defect prediction performance in the literature.
Objective.
We aim to investigate how much progress has really been made by comparing the defect prediction performance of existing CPDP models with that of simple module size models.
Method.
We first use module size in the target project to build two simple defect prediction models, ManualDown and ManualUp, which require no training data from source projects. ManualDown considers a larger module more defect-prone, while ManualUp considers a smaller module more defect-prone. We then take the following measures to ensure a fair comparison of prediction performance between the existing CPDP models and the simple module size models: using the same publicly available data sets, using the same performance indicators, and using the prediction performance reported in the original cross-project defect prediction studies.
Result.
The simple module size models achieve prediction performance comparable, or even superior, to most of the existing CPDP models in the literature, including many recently proposed models.
Conclusion.
The results caution us that, if prediction performance is the goal, real progress in CPDP has not been achieved to the extent that might have been envisaged. We therefore recommend that future studies include ManualDown/ManualUp as baseline models for comparison when developing new CPDP models to predict defects in a complete target project.
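The two baselines described above can be sketched in a few lines. This is a minimal illustration, assuming module size is measured as lines of code (LOC); the module names and sizes below are invented for the example.

```python
# Minimal sketch of the ManualDown/ManualUp baselines.
# Both simply rank target-project modules by size (here, LOC);
# no training data from source projects is needed.

def manual_down(modules):
    """Rank modules for inspection: larger modules first (treated as more defect-prone)."""
    return sorted(modules, key=lambda m: m["loc"], reverse=True)

def manual_up(modules):
    """Rank modules for inspection: smaller modules first (treated as more defect-prone)."""
    return sorted(modules, key=lambda m: m["loc"])

# Illustrative target-project modules (hypothetical names and sizes).
modules = [
    {"name": "Parser.java", "loc": 1200},
    {"name": "Utils.java", "loc": 150},
    {"name": "Engine.java", "loc": 3400},
]

print([m["name"] for m in manual_down(modules)])  # largest module first
print([m["name"] for m in manual_up(modules)])    # smallest module first
```

Because the ranking uses only target-project data, these baselines sidestep the data-distribution mismatch that motivates most CPDP transfer techniques, which is precisely what makes them a demanding comparison point.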
Software testing consumes a significant portion of software development effort. Program entities such as branches or definition-use pairs (DUPs) are used in diverse software development tasks. In this study, the authors present a novel evolution-based approach to generating test data for all-definition-use coverage. First, a reduction algorithm computes, from the full set of DUPs, a subset whose coverage ensures coverage adequacy. The authors then apply a genetic algorithm to generate test data for this subset, where the fitness of an individual depends on the matching degree between the traversed path and the definition-clear path of each target DUP. They evaluate the coverage and the size of the generated test suites by applying the approach to 15 widely used subject programs. The experimental results show that the approach can reduce the size of the generated test suites without affecting the coverage rate.
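One way to read "matching degree" in the abstract above is as the fraction of a target DUP's definition-clear path that the executed path covers in order. The sketch below illustrates that reading; it is an assumption for illustration, and the paper's exact fitness formula may differ (paths are represented here as simple lists of node IDs).

```python
# Hedged sketch of a matching-degree fitness for one target DUP.
# traversed: sequence of node IDs executed by a candidate test input.
# def_clear: the required definition-clear path for the target DUP.
# Returns a value in [0, 1]; 1.0 means the full required path was
# matched in order, so the DUP is covered by this input.

def matching_degree(traversed, def_clear):
    """Fraction of def_clear matched, in order, as a subsequence of traversed."""
    matched = 0
    for node in traversed:
        if matched < len(def_clear) and node == def_clear[matched]:
            matched += 1
    return matched / len(def_clear)
```

In a genetic algorithm, a fitness like this gives the search a gradient: inputs whose execution paths follow more of the required path score higher, guiding the population toward an input that covers the target DUP.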
Software defect prediction has attracted much attention from researchers in software engineering. Feature selection approaches have been introduced into software defect prediction and can effectively improve the performance of traditional within-project defect prediction (WPDP). However, studies of feature selection for cross-project defect prediction (CPDP) remain insufficient. In this paper, we use feature subset selection and feature ranking approaches to explore the effectiveness of feature selection for CPDP. An empirical study is conducted on the NASA and PROMISE datasets. The results show that both feature subset selection and feature ranking can improve the performance of CPDP. Future studies should therefore select a representative feature subset, or set a reasonable proportion of selected features, to improve CPDP performance.
Index Terms: software defect prediction, cross-project defect prediction, feature selection, feature ranking.
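The feature-ranking idea in the abstract above can be sketched simply: score each feature by some relevance measure, then keep a fixed proportion of the top-ranked features. The scorer below uses absolute Pearson correlation with the defect label purely as an illustrative ranker; the study may use different ranking measures, and the feature names and data here are invented.

```python
# Hedged sketch of feature ranking with a selection proportion.
# Features are columns (lists of numbers); y is the binary defect label.

def pearson(xs, ys):
    """Pearson correlation coefficient; 0.0 if either column is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_top_features(features, y, proportion=0.5):
    """Rank features by |correlation| with y; keep the top `proportion` of them."""
    scores = {name: abs(pearson(col, y)) for name, col in features.items()}
    k = max(1, int(len(features) * proportion))
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Illustrative data: "loc" tracks the label, "noise" is constant.
features = {"loc": [10, 20, 30, 40], "noise": [1, 1, 1, 1]}
y = [0, 0, 1, 1]
print(select_top_features(features, y, proportion=0.5))
```

The `proportion` parameter corresponds to the abstract's recommendation to "set a reasonable proportion of selected features"; in a CPDP setting the ranking would be computed on the source-project data and the selected feature subset applied to both source and target projects.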