Operating systems are complex interactive software systems that control access to information, and vulnerabilities in such software pose significant security risks. In this paper, we examine the feasibility of quantitatively characterizing vulnerabilities. For Windows 98 and Windows NT 4.0, we present plots of the cumulative number of vulnerabilities found. A time-based model for the total number of vulnerabilities discovered is proposed and fitted to the data for the two operating systems. We introduce a measure termed equivalent effort and propose an alternative model analogous to software reliability growth models. We show that both models fit well and that the fit is significant. We discuss the feasibility of using a new measure termed vulnerability density. We present data on known defect densities for the two operating systems and discuss the relationship between vulnerability density and general defect density. This relationship could lead to potential ways of estimating the number of vulnerabilities in the future.
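The abstract does not give the model's closed form. As a hedged illustration only, cumulative vulnerability-discovery data of this kind is often fitted with an S-shaped logistic curve; the sketch below uses hypothetical parameters A (a rate constant), B (the eventual total number of vulnerabilities), and C (a constant fixing the curve's starting point), none of which come from the paper itself.

```python
import math

def cumulative_vulns(t, A, B, C):
    """Illustrative S-shaped (logistic) cumulative-vulnerability curve:

        Omega(t) = B / (C * exp(-A * B * t) + 1)

    Early on, discovery is slow; it accelerates as the system gains
    users and attention, then saturates as Omega(t) approaches B,
    the assumed total number of vulnerabilities present.
    """
    return B / (C * math.exp(-A * B * t) + 1)

# Hypothetical parameters, chosen only to show the curve's shape.
A, B, C = 0.01, 100.0, 50.0
early, late = cumulative_vulns(1.0, A, B, C), cumulative_vulns(1000.0, A, B, C)
```

For these illustrative values the curve starts near B/(C + 1) at t = 0 and saturates toward B for large t, which is the qualitative behavior one expects from a cumulative discovery plot.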
Random testing is a well-known concept that requires each test to be selected randomly, regardless of the tests previously applied. This paper introduces the concept of antirandom testing, a strategy in which each test applied is chosen so that its total distance from all previous tests is maximal. Two distance measures are defined. Procedures to construct antirandom sequences are developed. A checkpoint encoding scheme is introduced that allows automatic generation of efficient test cases. Further developments and studies needed are identified.
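The paper's construction procedures are not reproduced here. As an illustrative sketch of the maximal-total-distance criterion the abstract describes, the brute-force greedy routine below builds an antirandom sequence over binary test vectors using Hamming distance (one of the two measures mentioned); the seed choice and exhaustive candidate enumeration are assumptions for illustration, not the paper's method, which develops more efficient constructions.

```python
from itertools import product

def hamming(a, b):
    """Hamming distance: number of bit positions where a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def antirandom_sequence(n_bits, k):
    """Greedily pick k binary test vectors of n_bits bits so that each
    new test maximizes its total Hamming distance to all earlier tests.

    Brute force over all 2**n_bits candidates; illustrative only.
    """
    candidates = list(product((0, 1), repeat=n_bits))
    seq = [candidates[0]]          # arbitrary seed: the all-zeros vector
    while len(seq) < k:
        chosen = set(seq)
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: sum(hamming(c, t) for t in seq))
        seq.append(best)
    return seq

seq = antirandom_sequence(3, 4)
```

Starting from (0, 0, 0), the criterion forces the second test to be its complement (1, 1, 1), which is the signature behavior of antirandom selection: each new test is pushed as far as possible from everything already tried.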
Security vulnerabilities in servers and operating systems are software defects that represent great risks. Both software developers and users are struggling to contain the risk posed by these vulnerabilities. Vulnerabilities are discovered by both developers and external testers throughout the life span of a software system. A few models of the vulnerability discovery process have recently been published. Such models allow effective resource allocation for patch development and are also needed for evaluating the risk of vulnerability exploitation. Here we examine these models for the vulnerability discovery process, both analytically and using actual data on vulnerabilities discovered in three widely used systems. The applicability of the proposed models and the significance of the parameters involved are discussed. The limitations of the proposed models are examined and major research challenges are identified.
Software test-coverage measures quantify the thoroughness of testing. Tools are now available that measure test coverage in terms of the blocks, branches, computation-uses, predicate-uses, etc. that are covered. This paper models the relations among testing time, coverage, and reliability. An LE (logarithmic-exponential) model is presented that relates testing effort to test coverage (block, branch, computation-use, or predicate-use). The model is based on the hypothesis that the enumerable elements (such as branches or blocks) for any coverage measure have varying probabilities of being exercised, just as defects have varying probabilities of being encountered. This model allows relating a test-coverage measure directly to defect coverage. The model is fitted to four data sets from programs with real defects. In the model, defect coverage can predict the time to the next failure. The LE model can eliminate variables such as test-application strategy from consideration. It is suitable for high-reliability applications where automatic (or manual) test generation is used to cover enumerables that have not yet been tested. The data sets used suggest the potential of the proposed model. The model is simple and easily explained, and thus suitable for industrial use. The LE model is based on the time-based logarithmic software-reliability growth model. It considers that, at 100% coverage for a given enumerable, all defects might not yet have been found.
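The LE model's exact functional form is not given in the abstract, so it is not reproduced here. The simulation below instead illustrates the stated underlying hypothesis: enumerable elements with widely varying per-test probabilities of being exercised produce coverage that grows quickly at first and then flattens with diminishing returns. The probability values and test counts are illustrative assumptions only.

```python
import random

def simulate_coverage(probs, n_tests, seed=0):
    """Simulate coverage growth for enumerable elements (e.g. branches)
    where element i is exercised by any single test with probability
    probs[i]. Returns the fraction covered after each test.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    covered = set()
    history = []
    for _ in range(n_tests):
        for i, p in enumerate(probs):
            if i not in covered and rng.random() < p:
                covered.add(i)
        history.append(len(covered) / len(probs))
    return history

# Hypothetical mix: easy, moderate, and hard-to-reach elements.
probs = [0.5] * 10 + [0.05] * 10 + [0.005] * 10
cov = simulate_coverage(probs, 200)
```

Because the easy elements are exhausted almost immediately while the rare ones linger, the curve shows the characteristic fast-then-slow growth that motivates a logarithmic-style model, and it also shows why even near-100% coverage of one enumerable need not mean every defect has been encountered.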