“…In his studies, however, an automation failure was potentially "obvious" to the users, since displays explicitly showed when parameters were going out of range and provided a means for the person to 'check the computer's competence.' In reality, many automation failures are much more opaque than that, sometimes due to the presence of latent errors (Reason, 1991), reasoning based on incorrect, noisy, or uncertain data (Layton, Smith, and McCoy, 1994; Guerlain, 1993b), the proliferation of modes and high coupling in many such systems (Sarter and Woods, 1994), and poor feedback as to system state (Norman, 1990). A recent study of airline pilots and dispatchers, for example, showed that in a scenario where the computer's brittleness leads to a poor recommendation, the generation of a suggestion by the computer early in the person's own problem solving can create a 30% increase in inappropriate plan selection over users of a manual version of the system (Layton et al., 1994).…”