This paper presents two strategies for multi‐way testing (i.e. t‐way testing with t>2). The first strategy generalizes an existing strategy, called in‐parameter‐order, from pairwise testing to multi‐way testing. This strategy requires all multi‐way combinations to be explicitly enumerated. When the number of multi‐way combinations is large, however, explicit enumeration can be prohibitive in terms of both the space for storing these combinations and the time needed to enumerate them. To alleviate this problem, the second strategy combines the first strategy with a recursive construction procedure to reduce the number of multi‐way combinations that have to be enumerated. Both strategies are deterministic, i.e. they always produce the same test set for the same system configuration. This paper reports a multi‐way testing tool called FireEye, and provides an analytic and experimental evaluation of the two strategies. Copyright © 2007 John Wiley & Sons, Ltd.
Most existing work on t-way testing has focused on 2-way (or pairwise) testing, which aims to detect faults caused by interactions between any two parameters. However, faults can also be caused by interactions involving more than two parameters. In this paper, we generalize an existing strategy, called In-Parameter-Order (IPO), from pairwise testing to t-way testing. A major challenge of our generalization effort is dealing with the combinatorial growth in the number of combinations of parameter values. We describe a t-way testing tool, called FireEye, and discuss the design decisions made to enable an efficient implementation of the generalized IPO strategy. We also report several experiments designed to evaluate the effectiveness of FireEye.
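The combinatorial growth described above can be made concrete with a short sketch. The function below enumerates every t-way combination of parameter values that a test set must cover; this is illustrative only and is not FireEye's implementation, and the `domains` input format is a hypothetical choice for the example.

```python
from itertools import combinations, product
from math import comb

def t_way_combinations(domains, t):
    """Enumerate all t-way combinations of parameter values.

    For each choice of t parameters, yield every tuple of values drawn
    from their domains. `domains` maps parameter name -> list of values
    (a hypothetical input format for this sketch).
    """
    combos = []
    for params in combinations(sorted(domains), t):
        for values in product(*(domains[p] for p in params)):
            combos.append(tuple(zip(params, values)))
    return combos

# Example: 4 parameters, 3 values each.
domains = {f"p{i}": [0, 1, 2] for i in range(4)}
print(len(t_way_combinations(domains, 2)))  # C(4,2) * 3^2 = 54
print(len(t_way_combinations(domains, 3)))  # C(4,3) * 3^3 = 108
```

For k parameters with v values each, the count is C(k, t) * v^t, which grows rapidly with t; this is the enumeration cost that the paper's second (recursive) strategy is designed to avoid.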
The NIST Software Assurance Metrics And Tool Evaluation (SAMATE) project conducted the fourth Static Analysis Tool Exposition (SATE IV) to advance research in static analysis tools that find security defects in source code. The main goals of SATE were to enable empirical research based on large test sets, encourage improvements to tools, and promote broader and more rapid adoption of tools by objectively demonstrating their use on production software. Briefly, eight participating tool makers ran their tools on a set of programs. The programs were four pairs of large code bases selected based on entries in the Common Vulnerabilities and Exposures (CVE) dataset, and approximately 60 000 synthetic test cases, the Juliet 1.0 test suite. NIST researchers analyzed approximately 700 warnings by hand, matched tool warnings to the relevant CVE entries, and analyzed over 180 000 warnings for Juliet test cases by automated means. The results and experiences were reported at the SATE IV Workshop in McLean, VA, in March, 2012. The tool reports and analysis were made publicly available in January, 2013. SATE is an ongoing research effort with much work still to do. This paper reports our analysis to date, which includes extensive data about weaknesses that occur in software and about tool capabilities. Our analysis is not intended to be used for tool rating or tool selection. This paper also describes the SATE procedure and provides our observations based on the data collected. Based on lessons learned from previous SATEs, we made the following major changes to the SATE procedure. First, we introduced the Juliet test suite, which has precisely characterized weaknesses. Second, we improved the procedure for characterizing vulnerability locations in the CVE-selected test cases. Finally, we provided teams with a virtual machine image containing the test cases, properly configured to compile and ready for analysis by tools.
This paper identifies several ways in which the released data and analysis are useful. First, the output from running many tools on production software is available for empirical research. Second, our analysis of tool reports indicates the kinds of weaknesses that exist in the software and that are reported by the tools. Third, the CVE-selected test cases contain exploitable vulnerabilities found in practice, with clearly identified locations in the code. These test cases can help practitioners and researchers improve existing tools and devise new techniques. Fourth, tool outputs for Juliet cases provide a rich set of data amenable to mechanical analysis. Finally, the analysis may be used as a basis for a further study of weaknesses in code and of static analysis.