Previous studies have shown that the times of voluntary actions and of their effects are perceived as shifted towards each other, so that the interval between action and outcome seems shortened. This has been referred to as 'intentional binding' (IB). However, the generality of this effect remains unclear. Here we demonstrate that intentional binding also occurs in complex control situations. Using an aircraft supervision task with different autopilot settings, our results first indicated a strong relation between measures of IB and different levels of system automation. Second, measures of IB were related to explicit agency judgements in this applied setting. We discuss the implications for the underlying mechanisms, and for the sense of agency in automated environments.
Recent studies have suggested that individuals are not able to develop a sense of joint agency during joint actions with automata. We sought to examine whether this lack of joint agency is linked to individuals' inability to co-represent automaton-generated actions. Fifteen participants observed or performed a Simon response-time task either individually, or jointly with another human or with a computer. Participants reported the time interval between their response (or the co-actor's response) and a subsequent auditory stimulus, which served as an implicit measure of their sense of agency. Participants' reaction times showed a classical Simon effect when they were partnered with another human, but not when they collaborated with a computer. Furthermore, participants showed a vicarious sense of agency when co-acting with another human agent but not with a computer. This absence of vicarious sense of agency during human-computer interactions and its relation to action co-representation are discussed.
Increasing the level of automation in air traffic management is seen as a measure to increase the performance of the service to satisfy the predicted future demand. This is expected to result in new roles for human operators: they will mainly monitor highly automated systems and seldom intervene. Air traffic controllers (ATCos) would therefore often work in a supervisory or control mode rather than in a direct operating mode. However, it has been demonstrated that human operators in such a role are affected by human performance issues, known as the Out-Of-The-Loop (OOTL) phenomenon, consisting of lapses of attention, loss of situational awareness, and de-skilling. A countermeasure to this phenomenon has been identified in adaptive automation (AA), i.e., a system able to allocate operative tasks to the machine or to the operator depending on their needs. In this context, psychophysiological measures have been highlighted as a powerful tool for providing a reliable, unobtrusive, real-time assessment of the ATCo's mental state to be used as the control logic for AA-based systems. This paper presents the "Vigilance and Attention Controller", a system based on electroencephalography (EEG) and eye-tracking (ET) techniques, aimed at assessing in real time the vigilance level of an ATCo dealing with a highly automated human–machine interface and at using this measure to adapt the level of automation of the interface itself. The system was tested on 14 professional ATCos performing two highly realistic scenarios, one with the system disabled and one with the system enabled. The results confirmed that (i) long, highly automated tasks induce vigilance decrements and OOTL-related phenomena; (ii) EEG measures are sensitive to these kinds of mental impairments; and (iii) AA was able to counteract this negative effect by keeping the ATCo more involved in the operative task.
These results were confirmed by EEG and ET measures as well as by performance and subjective measures, providing a clear example of the potential applications and related benefits of AA.
The increasing presence of automation between operators and the systems they supervise tends to disconnect operators from action outcomes, leading them to leave the control loop. The theoretical framework of agency suggests that priming the operator about the system's upcoming behaviour could help restore an appropriate sense of control and increase user acceptance of what the system is doing. In a series of two experiments, we tested whether providing information about what the system is about to do next leads to an increase in the level of user acceptance, concomitant with an increase in control and performance. Using an aircraft supervision task, we demonstrated the benefit of prime messages for system acceptance and performance. Taken together, our results indicate that the principles proposed by this framework could be used to improve human-machine interaction and maintain a high level of sense of control in supervisory tasks. Practitioner Summary: The out-of-the-loop performance problem is a major potential consequence of automation, leaving operators helpless to take over from automation in case of failure. Using an aircraft supervision task, the following article illustrates how the psychological approach of agency can help improve human-system interactions by designing more acceptable and more controllable automated interfaces.
To satisfy the increasing demand for safer critical systems, engineers have integrated higher levels of automation, such as glass cockpits in aircraft, power plants, and driverless cars. These design principles relegate the operator to a monitoring role, increasing the risk that humans lack system understanding. The out-of-the-loop performance problem arises when operators suffer from complacency and vigilance decrement; consequently, when automation does not behave as expected, understanding the system or taking back manual control may be difficult. Closely related to the out-of-the-loop problem, mind wandering refers to the propensity of the human mind to think about matters unrelated to the task at hand. This article reviews the literature related to both mind wandering and the out-of-the-loop performance problem as it relates to task automation. We highlight studies showing how these phenomena interact with each other while impacting human performance within highly automated systems. We analyze how this proximity is supported by effects observed in automated environments, such as decoupling and decreases in sensory attention and cognitive comprehension. We also show that this link could be useful for detecting out-of-the-loop situations through mind-wandering markers. Finally, we examine the limitations of current knowledge, because many questions remain open in characterizing the interactions between the out-of-the-loop problem, mind wandering, and automation.
Recent evidence showing that mind wandering might fill the time saved by automation is particularly worrying when taking into account the negative effect of mind wandering on short-term performance. Seventeen participants performed an obstacle avoidance task under manual and automated conditions in two sessions lasting 45 minutes each. We recorded attentional probes, oculometry, and answers to the Task Load Index after each session. Subjects perceived the manual condition as more demanding than the automated one. We highlighted a significant influence of automation on mind-wandering frequency over time. Multiple phenomena may play a role, such as complacency and decoupling from the task at hand. Pupil diameter decreased during mind-wandering versus focused periods, with a stable amplitude. Knowledge of mind wandering could be used in the near future to characterize and quantify an operator's state of mind with regard to automation-related problems.