“…It therefore accompanies voluntary actions 3–6, allows one to feel distinct from others 7–9, and to be responsible for one’s own actions 2,6,10,11. Studies show SoA emerges from, and is particularly sensitive to any disruption in, the congruous flow of intentional actions to expected sensory outcomes 12. Crucially, the degradation of this experience characterizes certain psychiatric and neurological disorders 13–15.…”
Sense of agency (SoA) refers to the experience or belief that one’s own actions caused an external event. Here we present a model of SoA in the framework of optimal Bayesian cue integration with mutually involved principles, namely reliability of action and outcome sensory signals, their consistency with the causation of the outcome by the action, and the prior belief in causation. We used our Bayesian model to explain the intentional binding effect, which is regarded as a reliable indicator of SoA. Our model explains temporal binding in both self-intended and unintentional actions, suggesting that intentionality is not strictly necessary given high confidence in the action causing the outcome. Our Bayesian model also explains that if the sensory cues are reliable, SoA can emerge even for unintended actions. Our formal model therefore posits a precision-dependent causal agency.
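As a concrete illustration of such precision-dependent binding, the short Python sketch below shows reliability-weighted (precision-weighted) attraction between the perceived time of an action and the perceived time of its outcome, gated by a prior belief in causation. The function name, the Gaussian-noise assumption, and the exact weighting scheme are illustrative assumptions for exposition only, not the authors’ published equations.

```python
def perceived_times(t_action, t_outcome, sigma_action, sigma_outcome, p_causal):
    """Illustrative precision-weighted temporal binding (sketch, not the published model).

    t_action, t_outcome   : physical times of the action and its outcome (ms)
    sigma_action, sigma_outcome : standard deviations of the two timing cues
    p_causal              : prior belief that the action caused the outcome, in [0, 1]
    """
    prec_a = 1.0 / sigma_action ** 2   # precision (reliability) of the action-timing cue
    prec_o = 1.0 / sigma_outcome ** 2  # precision (reliability) of the outcome-timing cue
    gap = t_outcome - t_action

    # Each estimate is attracted toward the other in proportion to the *other*
    # cue's relative precision, gated by the causal prior: a strong belief in
    # causation combined with reliable cues yields strong binding.
    shift_action = p_causal * (prec_o / (prec_a + prec_o)) * gap
    shift_outcome = p_causal * (prec_a / (prec_a + prec_o)) * gap

    return t_action + shift_action, t_outcome - shift_outcome


# Example: action at 0 ms, tone at 250 ms, with a noisier outcome cue.
# The action is perceived later and the tone earlier, i.e. temporal binding,
# and the compression grows with p_causal even when no intention is modelled.
print(perceived_times(0.0, 250.0, 30.0, 60.0, p_causal=0.8))
```

In this sketch the noisier cue shifts further toward the more reliable one, and the overall compression of the interval scales with the causal prior, which is the sense in which binding here depends on precision and belief in causation rather than on intention per se.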
“…Thus, even though both kinds of support systems can provide highly reliable recommendations, an AI is likely perceived as more competent than a DSS. Second, AI also extends DSS with a higher level of agency and autonomy 15,16. This might result in the perception that the AI is the deliberate initiator of the actions and their effects.…”
Technological advancements are ubiquitously supporting or even replacing humans in all areas of life, bringing the potential for human-technology symbiosis but also novel challenges. To address these challenges, we conducted three experiments in different task contexts, ranging from loan assignment and X-ray evaluation to process industry. Specifically, we investigated the impact of support agent (artificial intelligence, decision support system, or human) and failure experience (one vs. none) on trust-related aspects of human-agent interaction. This included not only the subjective evaluation of the respective agent in terms of trust, reliability, and responsibility when working together, but also, after a change in perspective, the willingness to be assessed by the agent oneself. In contrast to a presumed technological superiority, we show a general advantage of human support over both technical support systems (i.e., artificial intelligence and decision support system) with regard to trust and responsibility, regardless of task context, from the collaborative perspective. This effect reversed to a preference for technical systems when the perspective switched to being assessed. These findings illustrate an imperfect automation schema from the perspective of the advice-taker and demonstrate the importance of perspective when working with, or being assessed by, machine intelligence.
“…Swanepoel suggests that there are four of these: deliberative self-reflection, awareness of self in time, critical awareness of the environment, and norm violation [4]. Other exemplary studies identify criteria useful for conceptualizing machine agency, such as individuality, interactional asymmetry (being a source of activity), and normativity [5], goal-oriented activity contributing to the agent’s own endurance or maintenance [6], intentionality and forethought [7], adaptive regulation, self-reactiveness and self-reflectiveness [6][7][8], and a first-person sense of agency [9,10].…”