Safety-critical and safety-related systems are increasingly based, at least partially, on software or logic components. High integrity claims are usually placed on these systems: their probability of failure during operation must be below a specified level in order to ensure that the risk of operation is sufficiently low. This implies that knowledge must be gained about the dependability of these systems or components in actual field use. Dependability assessment methods for software are not as well established as those for hardware. Currently, formal proof and statistical testing are the only methods with the potential to assess software dependability quantitatively. The present paper explores the applicability of statistical (software) testing (ST) to a real safety-related software application. It discusses the key points arising in this task and highlights the unique and important role ST can play within the wider task of software verification.

Software- or logic-based components play an increasing role in safety-critical industries, e.g. nuclear, avionics, rail, automotive, and medical applications. For example, in nuclear safety-related systems, obsolete hard-wired components often have to be replaced with programmable electronic systems (PESs). Smart sensors, which contain a software element, are also increasingly employed here, and computer-based data processing systems are used to supply control-room staff with essential information on the status of the plant. The software parts of these systems exhibit systematic failures and design faults rather than random hardware failure processes, and this creates important issues for safety assessment. Models that cope with random failure processes exist and have been widely used, but they do not apply in the presence of systematic failures. An additional problem arises in the assessment of smart devices.
These devices are often commercial-off-the-shelf (COTS) components, and assessment techniques that would commonly be used are not available, since it is sometimes not possible or permissible to access the code in order to perform tasks such as code inspection or static analysis.

To address the issue of software failure, current industry standards [1] contain clauses recommending system design techniques, which tools to use, how to manage the project team, and how to test the code (partition testing, coverage-based testing, etc.). Depending on which recommendations are followed, the users of a software-based system can claim that the dependability of their system falls within one of several safety integrity levels, which represent target dependability or availability measures. Important as these recommendations are, they cannot actually measure or quantify whether the target dependability has been achieved. A key requirement for any prospective software assurance method is therefore increased objectivity through quantified dependability statements. There are two main metho...