Objective measurement of test quality is one of the key issues in software testing, and it has been a major research focus for the last two decades. Many test criteria have been proposed and studied for this purpose, and various rationales have been presented in support of one criterion or another. We survey the research work in this area. The notion of adequacy criteria is examined together with its role in dynamic software testing. A review of criteria classification is followed by a summary of the methods for comparison and assessment of criteria.
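By way of illustration (a minimal sketch of ours, not an example from the survey): one of the simplest adequacy criteria is statement coverage, under which a test set is judged adequate only if it executes every statement of the program under test. The program and test values below are invented for the sketch.

```python
# A minimal sketch (ours, not from the survey) of one adequacy
# criterion: statement coverage.  The test set is adequate only
# if every line of the invented program under test is executed.
import sys

def classify(x):
    if x < 0:
        return "negative"
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    # Record each line executed inside classify.
    if event == "line" and frame.f_code is classify.__code__:
        executed.add(frame.f_lineno)
    return tracer

tests = [-1, 5]                    # the test set being assessed

sys.settrace(tracer)
for t in tests:
    classify(t)
sys.settrace(None)

first = classify.__code__.co_firstlineno
required = set(range(first + 1, first + 4))   # the three body lines
print("adequate" if required <= executed else "inadequate")
```

Dropping either test value leaves one branch unexecuted, and the same test set is then judged inadequate; this is the sense in which a criterion gives an objective measure of test quality.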
Approximate matching of strings is reviewed with the aim of surveying techniques suitable for finding an item in a database when there may be a spelling mistake or other error in the keyword. The methods found are classified as either equivalence or similarity problems. Equivalence problems are seen to be readily solved using canonical forms. For similarity problems, difference measures are surveyed, with a full description of the well-established dynamic programming method, relating it to the approach using probabilities and likelihoods. Searching for approximate matches in large sets using a difference function is seen to be still an open problem, though several promising ideas have been suggested. Approximate matching (error correction) during parsing is briefly reviewed.
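The dynamic programming method in question computes, in its standard form, the edit distance between two strings: the minimum number of single-character insertions, deletions, and substitutions needed to turn one into the other. The following sketch is our illustration with invented names, not code from the survey.

```python
# Minimal sketch of the dynamic programming difference measure
# (edit distance); names are illustrative, not from the survey.
def edit_distance(a, b):
    # d[i][j] = minimum edits turning a[:i] into b[:j]
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i                    # delete all of a[:i]
    for j in range(len(b) + 1):
        d[0][j] = j                    # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(a)][len(b)]

# A small distance flags a likely misspelling of a database keyword.
print(edit_distance("similarity", "sinuiarity"))
```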
This paper examines optimization within a relational data base system. It considers the optimization of a single query defined by an expression of the relational algebra. The expression is transformed into an equivalent expression or sequence of expressions that cost less to evaluate. Alternative transformations, and combinations of several transformations, are analyzed. Measurements on an experimental data base showed improvements, especially in cases where the original expression would be impracticably slow in its execution. A small overhead was incurred, which would be negligible for large data bases.
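One classic example of such a transformation, sketched below in illustrative Python rather than the paper's relational algebra notation, is pushing a selection through a join so that a smaller, filtered relation is joined instead of the full one. The toy relations and predicates are invented for the sketch.

```python
# Toy illustration (ours, not the paper's system) of one classic
# transformation: pushing a selection below a join so the join
# operates on a smaller input while producing the same result.
employees = [("alice", "sales"), ("bob", "hr"), ("carol", "sales")]
depts     = [("sales", "london"), ("hr", "paris")]

# Naive plan: join first, select afterwards.
joined = [(e, d) for e in employees for d in depts if e[1] == d[0]]
naive  = [(e, d) for (e, d) in joined if d[1] == "london"]

# Transformed plan: select on depts first, then join the smaller input.
london = [d for d in depts if d[1] == "london"]
pushed = [(e, d) for e in employees for d in london if e[1] == d[0]]

assert naive == pushed            # equivalent results...
print(len(depts), len(london))    # ...from a smaller join input
```

On realistic data the difference between joining the full relation and the filtered one is what turns an impracticably slow expression into a cheap one.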
The term ‘inductive inference’ denotes the process of hypothesizing a general rule from examples. It can be considered the inverse of program testing, which samples the behaviour of a program and gathers confidence in the quality of the software from the samples. Because inductive inference is one of the fundamental and ubiquitous components of intelligent behaviour, much effort has been spent on both its theory and its practice as a branch of artificial intelligence. In this paper, software testing and inductive inference are reviewed to illustrate how the rich and solid theory of inductive inference can be used to study the foundations of software testing.
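To make the duality concrete (a deliberately contrived sketch with invented names, not drawn from the paper): inference hypothesizes a rule from input/output examples, while testing checks a given program against such samples.

```python
# Contrived sketch of the duality between inductive inference and
# testing; all names and the example data are made up.
examples = [(0, 0), (1, 2), (2, 4), (3, 6)]   # observed behaviour

def infer_linear(pairs):
    # Inference step: hypothesize f(x) = k * x from the examples
    # (assumes at least two examples with a nonzero input).
    k = pairs[1][1] // pairs[1][0]
    hypothesis = lambda x: k * x
    return hypothesis if all(hypothesis(x) == y for x, y in pairs) else None

def test(program, pairs):
    # Testing step: sample the program and gain confidence
    # from its agreement with the expected outputs.
    return all(program(x) == y for x, y in pairs)

hypothesis = infer_linear(examples)   # infers f(x) = 2x
print(test(hypothesis, examples))     # the samples corroborate it
```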
The Open University's M880 Software Engineering is a postgraduate distance education course aimed at software professionals. The case study element of the course (approximately 100 hours of study) is presented through an innovative interactive multimedia simulation of a software house, Open Software Solutions (OSS). The student 'joins' OSS as an employee and performs various tasks as a member of the company's project teams. The course is now in its sixth presentation and has been studied by over 1500 students. In this paper, we present the background to the development and a description of the environment and the student tasks.
SUMMARY The routine manual encoding of pathological data, using the SNOP and SNOMED systems at two London teaching hospitals, was reviewed. The error rates in the two departments were compared and the causes analysed. The relative merits of SNOP and SNOMED were considered. Methods to optimise the efficiency of manual encoding are suggested and the importance of accuracy in coding is emphasised.

A routine histopathology department produces a vast quantity of data, and it is important that such data are carefully recorded and suitably indexed, usually by numerical code. Storage and retrieval of the information has often entailed a card index, but computers are increasingly being used because of their speed, storage capacity, and their potential for data manipulation. Although the use of computers to perform the data encoding has been advocated [1-3], this is not widespread at present: it is more common for the reporting pathologist to perform the encoding manually, often using either the SNOP [4] or SNOMED [5] systems. The accuracy of such manual encoding is clearly important, and it is surprising that there have been few studies on the magnitude and causes of errors in non-automated coding systems [6]. The widespread adoption of computers in all branches of pathology may ease data handling, but it cannot compensate for inaccurate data.

The departments of morbid anatomy at both the London Hospital and University College Hospital (UCH) use the same computerised report system [7], and both use a system of manual data encoding by the reporting pathologists. They differ in that UCH uses SNOMED while the London Hospital uses SNOP. The rationale for report coding is to allow cases to be retrieved for research series, teaching purposes, and departmental auditing. The computerised records system permits the rapid collection of previous reports on material from each patient. As a result of our personal experience of the results of poor coding, we decided to perform a comparative study of error rates and their causes at both institutions.