Objective: In this review, we investigate the relationship between agent transparency, situation awareness, mental workload, and operator performance in safety-critical domains.

Background: The advancement of highly sophisticated automation across safety-critical domains poses a challenge for effective human oversight. Automation transparency is a design principle that could support humans by making the automation's inner workings observable (i.e., "seeing into" them). However, experimental support for this has not been systematically documented to date.

Method: Based on the PRISMA method, a broad and systematic search of the literature was performed, focusing on identifying empirical research investigating the effect of transparency on central human factors variables.

Results: Our final sample consisted of 17 experimental studies that investigated transparency in a controlled setting. The studies typically employed three human-automation interaction types: responding to agent-generated proposals, supervisory control of agents, and monitoring only. There is an overall trend in the data pointing towards a beneficial effect of transparency. However, the data reveal variations in situation awareness, mental workload, and operator performance depending on the specific task, the agent type, and the level of integration of transparency information in primary task displays.

Conclusion: Our data suggest a promising effect of automation transparency on situation awareness and operator performance, without the cost of added mental workload, in instances where humans respond to agent-generated proposals and where humans have a supervisory role.

Application: Strategies to improve human performance when interacting with intelligent agents should focus on allowing humans to see into the agents' information-processing stages, considering the integration of this information into existing human-machine interface solutions.
A flight simulator experiment was set up to study relevant human factors tools for assessing the situation awareness of pilots. A specific scenario was designed in which a malfunction of the aircraft was introduced during flight: an indicated airspeed discrepancy. Pilot behavior was studied while the pilots tried to determine the correct speed. Eye movement metrics alone provided an insufficient picture of pilot situation awareness, but when purposefully combined with subjective, self-rating metrics, they offered a more comprehensive view of situation awareness, covering all three levels of Endsley's situation awareness definition.

Over the past two or three decades, the concept of situation awareness (SA) has received considerable attention from the human factors (HF) research community. Although originally a term used within (military) aviation, SA has developed into a major concern in many other domains where people operate complex, dynamic systems (e.g., maintenance, air traffic control, medical systems, and the nuclear power industry). Achieving SA is one of the most challenging aspects of these operators' jobs and is central to good decision making and performance. This stems largely from a growing concern with the effects of widespread automation and advanced information systems on the ability of humans to take in and comprehend exactly what is going on without becoming confused, overloaded, or error-prone. As a consequence, valid and meaningful measures of SA are required to help us assess the design and use of complex systems in simulations.

Correspondence should be sent to Henk van Dijk, Air Transport Training Simulation and Operator