Practical and theoretical issues are presented concerning the design, implementation, and use of a good, minimal standard random number generator that will port to virtually all systems.
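The "minimal standard" generator the abstract refers to is conventionally the Lehmer multiplicative congruential generator with multiplier 16807 and modulus 2^31 − 1. A minimal sketch (not the paper's own implementation, which addressed overflow on 32-bit machines):

```python
def minstd(seed):
    """Minimal standard Lehmer generator:
    seed_{n+1} = 16807 * seed_n mod (2^31 - 1)."""
    M = 2147483647  # Mersenne prime 2^31 - 1
    A = 16807       # 7^5, a full-period multiplier for M
    while True:
        seed = (A * seed) % M
        yield seed

# Published self-check: starting from seed 1, the 10,000th value is 1043618065.
gen = minstd(1)
for _ in range(10000):
    x = next(gen)
print(x)  # 1043618065
```

Python's arbitrary-precision integers sidestep the overflow problem that makes a portable implementation nontrivial in languages with fixed-width arithmetic.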
Software verification encompasses a wide range of techniques and activities geared toward demonstrating that software is reliable. Verification techniques such as testing provide a way to assess the likelihood that software will fail during use. This paper introduces a different type of verification that shows how likely it is that an incorrect program will not fail. Our verification applies fault-injection methods to predict where actual faults are more likely to hide. This verification can be combined with software testing to assess a confidence that the code is not hiding faults. Code that hides faults is difficult to test. To minimize the problem of hidden faults, we seek methods for identifying and isolating source code that is likely to hide faults. We also introduce the notion of "information loss," a characteristic that can be measured during the early phases of design to suggest where the planned software is likely to harbor faults that will be difficult to uncover during testing.
This paper presents a fault-injection methodology that predicts how software will behave when: (1) components of the software fail, (2) hardware components external to the software fail, (3) human-factor errors occur and bad input is provided to the software, and (4) the software is executing in unlikely operational modes. Because of the enterprise-critical nature of many of today's software systems, it is vital that these systems are robust enough to handle problems that originate externally as well as the expected problems that will arise from internal defects. The paper also presents four case studies that highlight the benefit of this analysis for both safety-critical and non-safety-critical systems.
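A common way to realize fault injection of the kind both abstracts describe is to wrap a component so that, with some probability, its output is corrupted before it propagates to the rest of the system. The sketch below is illustrative only; the function names and the corruption strategy are assumptions, not the methodology from the paper:

```python
import random

def inject_fault(component, failure_rate=0.1, corrupt=lambda result: None):
    """Wrap a component so it occasionally returns a corrupted result,
    simulating an internal defect or a failing external dependency.

    failure_rate -- probability that any given call is corrupted
    corrupt      -- maps the correct result to a faulty one
    """
    def wrapped(*args, **kwargs):
        result = component(*args, **kwargs)
        if random.random() < failure_rate:
            return corrupt(result)  # simulated fault: perturb the output
        return result
    return wrapped

# Hypothetical usage: negate a sensor reading 10% of the time, then observe
# whether downstream checks catch the corruption or let it propagate silently.
read_sensor = inject_fault(lambda: 42.0, failure_rate=0.1,
                           corrupt=lambda r: -r)
```

Running the instrumented system and counting how often corrupted values escape detection gives an empirical estimate of where faults can hide.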
Floridi and Sanders' seminal work, "On the morality of artificial agents," has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as "artificial agents." Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the 'true' status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.