Functional near-infrared spectroscopy (fNIRS) is a versatile imaging modality whose popularity in the neuroimaging community is growing rapidly. Our research attempts to quantify workload in a naturalistic driving scenario with multiple parallel tasks using fNIRS. Nine young adults participated in this study, driving in a driving simulator for 100 minutes while we continuously recorded fNIRS data. We used an n-back task to induce different workload levels, requiring participants to remember the previous one, two, three, or four speed signs and adjust their speed accordingly while interacting with traffic in the virtual-reality driving scenario. Our results indicate that hemodynamic responses measured from the bilateral prefrontal cortex (PFC) can reliably quantify cognitive workload levels even in more complex naturalistic tasks.
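The n-back rule described above can be sketched in a few lines of code. This is a minimal illustration only: the sign values and scoring are hypothetical, since the abstract does not specify the study's actual sign sequence.

```python
# Sketch of the n-back speed-sign rule: at each moment the driver must
# hold the speed shown on the sign seen n steps earlier.
from collections import deque

def target_speed(sign_sequence, n):
    """Yield the speed the driver should hold at each step:
    the sign shown n steps back (None until n signs have appeared)."""
    history = deque(maxlen=n)
    for sign in sign_sequence:
        yield history[0] if len(history) == n else None
        history.append(sign)

# Hypothetical sign sequence (km/h) with a 2-back rule:
signs = [50, 70, 30, 50, 90]
print(list(target_speed(signs, 2)))  # [None, None, 50, 70, 30]
```

Increasing `n` from one to four lengthens the sequence of signs the driver must hold in working memory, which is how the task graduates workload.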
In order to reduce human error in the interaction with safety-critical assistance systems, it is crucial to include the characteristics of the human operator from the early phases of the design process onward. In this paper we present a cognitive architecture for simulating man-machine interaction in the aeronautics and automotive domains. Although both domains have their own characteristics, we argue that the same core architecture can support both pilot-centered and driver-centered design of assistance systems. This text shows how phenomena relevant in the automotive or aviation environment can be integrated into the same cognitive architecture.
Abstract. This paper presents a cognitive modelling approach to predict pilot errors and error recovery during the interaction with aircraft cockpit systems. The model executes flight procedures in a virtual simulation environment and produces simulation traces. We present traces for the interaction with a future Flight Management System that show in detail the dependencies of two cognitive error-production mechanisms integrated in the model: Learned Carelessness and Cognitive Lockup. The traces provide a basis for later comparison with human data in order to validate the model. The ultimate goal of the work is to apply the model within a method for the analysis of human errors to support human-centred design of cockpit systems. As an example, we analyze the perception of automatic flight mode changes.
Causal models are increasingly suggested as a means to reason about the behavior of cyber-physical systems in socio-technical contexts. They allow us to analyze courses of events and reason about possible alternatives. Until now, however, such reasoning has been confined to the technical domain and limited to single systems or at most groups of systems. The humans who are an integral part of any such socio-technical system are usually ignored or dealt with by "expert judgment". We show how a technical causal model can be extended with models of human behavior to cover the complexity of, and interplay between, humans and technical systems. This integrated socio-technical causal model can then be used to reason not only about actions and decisions taken by the machine, but also about those taken by humans interacting with the system. In this paper we demonstrate the feasibility of merging causal models about machines with causal models about humans and illustrate the usefulness of this approach with a highly automated vehicle example.

Figure 1: The rock-throwing example. (a) Classic version. (b) With military conditioning.

The search for definitions of causality that do not lead to such counterintuitive results culminated in the Halpern-Pearl definition of causality [10]. It resolves the issues by introducing a so-called preemption relation [12] that can express that Suzy's throw preempted Billy's throw (see Figure 1a). While such a model expresses the objective facts very well, we cannot reason about why Suzy threw faster. To do so, we would need a model of Suzy's mind.
If, for example, she was a soldier and, as part of her training, was conditioned to throw rocks the moment a bottle appeared in her field of view, we might not simply say that "Suzy throwing the rock caused the bottle to shatter", but extend our causal chain to "Her military training caused Suzy to automatically throw the rock at the bottle, shattering it". Instead of just blaming Suzy, we could also consider her military training. Returning to CPS and their interaction with humans, we can now utilize existing models of human behavior, as in Figure 1b, transform them into causal models, and link them with causal models of the technical systems. Instead of just saying "The car crashed because the driver pressed the red button", we can say that "Drivers are conditioned to press the red button in an emergency; this led to the driver pressing the button and the car crashing". While we might not have enough data in many cases, where we do, like the example in Section 3, and can actually extend the causal model into the human mind, we can gain valuable insights and avoid the unsatisfying generic answer of "human error". In this paper we investigate the problem of joining causal models of technical systems with causal models of their operators and the people they interact with. As a solut...
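The rock-throwing model, extended with the military-training variable, can be written as a small system of structural equations. The sketch below follows the standard textbook encoding of the example (preemption expressed as "Billy's rock only hits if Suzy's does not"); the `MT` extension is an illustration of the idea in the text, not the paper's actual model.

```python
# Structural-equation sketch of Figure 1: Suzy/Billy rock throwing with
# preemption, extended by a MilitaryTraining (MT) variable for Suzy.
def evaluate(do=None):
    """Evaluate the causal model; `do` overrides variables with
    interventions in the sense of Pearl's do-operator."""
    do = do or {}
    v = {}
    def val(name, default):
        return do.get(name, default)
    v["MT"] = val("MT", True)                      # Suzy is military-trained
    v["ST"] = val("ST", v["MT"])                   # conditioning makes Suzy throw
    v["BT"] = val("BT", True)                      # Billy throws
    v["SH"] = val("SH", v["ST"])                   # Suzy's rock hits if she throws
    v["BH"] = val("BH", v["BT"] and not v["SH"])   # preemption: Billy hits only if Suzy misses
    v["BS"] = val("BS", v["SH"] or v["BH"])        # bottle shatters if either hits
    return v

print(evaluate()["BS"])                  # True: bottle shatters, via Suzy
print(evaluate(do={"ST": False})["BS"])  # True: Billy's rock shatters it instead
print(evaluate()["BH"])                  # False: Billy's throw was preempted
```

The intervention `do={"ST": False}` shows why simple but-for counterfactual reasoning fails here (the bottle shatters either way), which is the kind of counterintuitive result the Halpern-Pearl definition is designed to resolve; adding `MT` lets the causal chain extend into Suzy's conditioning rather than stopping at her action.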