Objective measurement of test quality is one of the key issues in software testing, and it has been a major research focus for the last two decades. Many test criteria have been proposed and studied for this purpose, and various rationales have been presented in support of one criterion or another. We survey the research work in this area. The notion of adequacy criteria is examined together with its role in dynamic software testing. A review of criteria classification is followed by a summary of the methods for comparison and assessment of criteria.
While agile practices can match the needs of large organizations, especially for small, collocated teams, integrating new practices with existing processes and quality systems will require further tailoring.
Abstract. A plethora of subjective evidence exists to support the use of agile development methods on non-life-critical software projects. Until recently, Extreme Programming and agile methods have been sparsely applied to mission-critical software products. This paper gives some objective evidence, through our experiences, that agile methods can be applied to life-critical systems. It describes a large, mission-critical software project developed using an agile methodology, and discusses our development process through some of the key components of Extreme Programming (XP).
The term ‘inductive inference’ denotes the process of hypothesizing a general rule from examples. It can be considered as the inverse process of program testing, which is a process of sampling the behaviour of a program and gathering confidence in the quality of the software from the samples. As one of the fundamental and ubiquitous components of intelligent behaviour, much effort has been spent on both the theory and practice of inductive inference as a branch of artificial intelligence. In this paper, software testing and inductive inference are reviewed to illustrate how the rich and solid theory of inductive inference can be used to study the foundations of software testing.
We address the problems of estimating the reliability of multiple-version software, and improve the understanding of the various ways failure dependence between versions can arise. Specifically, we move from the previous conceptual models, which described what behaviour could be expected "on average" from a randomly chosen pair of "independently generated" versions, to predictions using specific information about a given pair of versions. The concept of "variation of difficulty" between situations to which software may be subject is central to the previous models cited. We show that it has more far-reaching implications than previously found, and we demonstrate the practical implications of varying probabilities of failure over input subdomains or operating regimes. A direct practical gain for designers, users and regulators is the possibility of estimating useful upper and lower bounds on the reliability of a two-version system.
Abstract. SCADA and industrial control systems have traditionally been isolated in physically protected environments. However, developments such as the standardisation of data exchange protocols, the increased use of IP, and emerging wireless sensor networks and machine-to-machine communication mean that in the near future related threat vectors will require consideration beyond the scope of traditional SCADA security and incident response. In light of the significance of SCADA for the resilience of critical infrastructures and the targeted incidents against them (e.g. the development of Stuxnet), cyber security and digital forensics emerge as priority areas. In this paper we focus on the latter, exploring the current capability of SCADA operators to analyse security incidents and develop situational awareness based on a robust digital evidence perspective. We look at the logging capabilities of a typical SCADA architecture and the analytical techniques and investigative tools that may help develop forensic readiness to a level commensurate with the current threat environment. We also provide recommendations for data capture and retention.