For decades, our lives have depended on the safe operation of automated mechanisms around and inside us. The autonomy and complexity of these mechanisms are increasing dramatically. Autonomous systems such as self-driving cars rely heavily on inductive inference and complex software, both of which confound traditional software-safety techniques that focus on amassing sufficient confirmatory evidence to support safety claims. In this paper we survey existing methods and tools that, taken together, can enable a new and more productive philosophy for software safety based on Karl Popper's idea of falsificationism.
Trusting self-driving cars

For decades, our lives have depended on the safe operation of automated mechanisms around and inside us. However, the autonomy of these mechanisms is increasing dramatically, going from comparatively simple drive-by-wire control to fully autonomous self-driving cars. The safety risks posed by autonomous systems cannot be mitigated through mechanical interlocks or similar tried-and-true techniques. Furthermore, these autonomous systems will operate in unstructured environments (including highways not designed for self-driving cars and unpredictable weather conditions) that will present a myriad of unexpected situations. Without question, automation holds the promise of reducing accident rates; for example, self-driving cars have the potential to virtually eliminate accidents caused by inattentive drivers. However, an attentive human driver has a tremendous capacity for reacting responsibly to circumstances for which they have not been explicitly trained. With the human out of the loop, an autonomous car is far less capable of handling unforeseen circumstances. By definition, an "unstructured environment" such as a real-world road network includes plenty of unforeseen conditions.

PREPRINT: G. Meyer & S. Beiker (eds.) Road Vehicle Automation 2