Safety-compromising bugs in software-controlled systems are often hard to detect. In a 2007 DARPA Urban Challenge vehicle, such a defect remained hidden during more than 300 miles of test-driving, manifesting for the first time during the competition. With this incident as an example, the authors discuss formalisms and techniques available for safety analysis of cyber-physical systems.

Turning left during the third round of the 2007 DARPA Urban Challenge, Alice, an autonomous Ford Econoline van, dangerously deviated from the computer-generated path and started stuttering in the middle of a busy intersection. Earlier in the competition, Alice had completed two rounds of missions involving on- and off-road driving, parking, merging, and U-turns while obeying traffic rules, all with style and with no human driver. This was a testament to 15 months of programming, debugging, and test-driving by a team of 50 students and researchers from Caltech, the Jet Propulsion Laboratory, and Northrop Grumman.

Alice's onboard hardware included 10 cameras; eight laser radars (LADARs); two ordinary radars; an inertial navigation system; two pan-tilt units; 25 CPUs; and actuators for the steering, throttle, brake, transmission, and ignition (see Figure 1). The software comprised the sensing and control systems, with 48 individual programs and more than 100 concurrent threads. The control system had modules for making actuation decisions at different spatial and temporal scales on the basis of the processed data streams (see Figure 2a). For instance, the mission planner computed routes for completing high-level missions using a road map; the traffic planner ensured conformance to traffic rules on the basis of finite state machines (FSMs); the path planner generated waypoints on the basis of inputs from the upper levels and detected obstacles; and the controller computed acceleration and steering signals for the vehicle to follow the waypoints.
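The layered decision-making described above can be sketched, very loosely, as a pipeline of four functions. This is a hypothetical illustration only: the function names, the toy road-map graph, and the trivial FSM are invented for exposition and bear no relation to the team's actual code.

```python
from collections import deque

# Illustrative sketch of a layered control stack like the one described
# in the text. All names, data structures, and numbers are hypothetical.

def mission_planner(road_map, start, goal):
    """Compute a route to the goal: breadth-first search over the road map."""
    frontier, visited = deque([[start]]), {start}
    while frontier:
        route = frontier.popleft()
        if route[-1] == goal:
            return route
        for nxt in road_map.get(route[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(route + [nxt])
    return None

def traffic_planner(state, obstacle_ahead):
    """Tiny FSM enforcing one traffic rule: stop while an obstacle is ahead."""
    if state == "DRIVE" and obstacle_ahead:
        return "STOP"
    if state == "STOP" and not obstacle_ahead:
        return "DRIVE"
    return state

def path_planner(route, positions):
    """Expand the high-level route into geometric waypoints."""
    return [positions[node] for node in route]

def controller(pose, waypoint, fsm_state):
    """Compute (acceleration, steering) to track the next waypoint."""
    if fsm_state == "STOP":
        return (-1.0, 0.0)                      # brake hard, hold the wheel
    dx = waypoint[0] - pose[0]
    dy = waypoint[1] - pose[1]
    steering = 0.0 if dx == 0 else dy / dx      # crude proportional steer
    return (0.5, steering)

# One pass through the stack on a toy three-node road map.
road_map = {"A": ["B"], "B": ["C"], "C": []}
positions = {"A": (0.0, 0.0), "B": (10.0, 0.0), "C": (20.0, 5.0)}
route = mission_planner(road_map, "A", "C")
state = traffic_planner("DRIVE", obstacle_ahead=False)
waypoints = path_planner(route, positions)
accel, steer = controller((0.0, 0.0), waypoints[1], state)
print(route, state, (accel, steer))
```

The point of the layering is that each function consumes only the abstraction produced by the layer above it, which is exactly what made "unit testing" each layer in isolation seem sufficient.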
For safety, we "unit tested" each component against reasonable-sounding but informal assumptions about the other components and its own physical environment. An unforeseen interaction among the control modules and the physical environment, not witnessed during more than 300 miles of autonomous test-driving and hours of extensive simulations, led to this safety violation and Alice's unfortunate disqualification.

What happened? We implemented a reactive obstacle avoidance (ROA) subsystem to rapidly decelerate Alice for collision avoidance. Under normal operation, ROA would send a brake command to the controller when Alice got too close to an obstacle or when it deviated too much from the planned path. The controller would then rapidly stop Alice, and the path planner would generate