An experiment comparing the effectiveness of the all-uses and all-edges test data adequacy criteria was performed. The experiment was designed to overcome some of the deficiencies of previous software testing experiments. A large number of test sets was randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of executable edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than all-edges adequate test sets. All-uses was significantly more effective than all-edges for five of the subjects, and appeared guaranteed to detect the error in four of them. Further analysis showed that in four of these subjects, all-uses adequate test sets were more effective than all-edges adequate test sets of similar size. Logistic regression analysis was used to investigate whether the probability that a test set exposes an error increases as the percentage of definition-use associations or edges covered by it increases. The evidence did not strongly support this conjecture. Error-exposing ability was shown to be strongly positively correlated with the percentage of covered definition-use associations in only four of the nine subjects. Error-exposing ability was also shown to be positively correlated with the percentage of covered edges in four (different) subjects, but the relationship was weaker.
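To make the two criteria concrete, the following sketch (not from the paper; the control-flow graph, node names, and definition-use pairs are invented for illustration) measures the two coverage percentages the experiment records for a test set: the fraction of edges exercised and the fraction of definition-use associations exercised.

```python
# Illustrative sketch: all-edges vs. all-uses coverage over a toy
# control-flow graph. The graph models a single if/else: a variable x
# is defined at "entry" and used at the branch and in both arms.

# Edges of the hypothetical control-flow graph.
EDGES = {("entry", "branch"), ("branch", "then"), ("branch", "else"),
         ("then", "exit"), ("else", "exit")}

# Definition-use associations: (node defining var, node using var, var).
DU_PAIRS = {("entry", "branch", "x"), ("entry", "then", "x"),
            ("entry", "else", "x")}

def coverage(paths):
    """Return (% of edges covered, % of du-pairs covered) for a test set,
    where each test is represented by the node path it executes."""
    covered_edges, covered_dus = set(), set()
    for path in paths:
        covered_edges |= set(zip(path, path[1:])) & EDGES
        for d, u, v in DU_PAIRS:
            # Crude check: the definition appears before the use on this path.
            if d in path and u in path and path.index(d) < path.index(u):
                covered_dus.add((d, u, v))
    return (100 * len(covered_edges) / len(EDGES),
            100 * len(covered_dus) / len(DU_PAIRS))

# A test set exercising only the "then" arm covers 3/5 edges and 2/3 du-pairs.
print(coverage([["entry", "branch", "then", "exit"]]))
```

A test set covering both arms reaches 100% on both criteria; in general, all-uses adequacy subsumes all-edges adequacy on graphs like this, which is why the experiment compares their error-exposing power rather than their coverage alone.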
A person testing a program has many methods to choose from, but little solid information about how these methods compare. Where analytic comparisons do exist, their significance is often in doubt. In this paper we examine various comparisons that have been used or proposed for test data selection and adequacy criteria. We characterize them by type and identify their strengths and weaknesses. We examine useful properties of comparisons and study the relationship between analytical and probabilistic comparisons. We find that analytical comparisons provide information of limited value, and that probabilistic comparisons overcome some of these limitations.