In the Rubber Hand Illusion, the feeling of ownership of a rubber hand displaced from a participant's real occluded hand is evoked by synchronously stroking both hands with paintbrushes. A change of perceived finger location towards the rubber hand (proprioceptive drift) has been reported to correlate with this illusion. To measure the time course of proprioceptive drift during the Rubber Hand Illusion, we regularly interrupted stroking (performed by robot arms) to measure perceived finger location. Measurements were made by projecting a probe dot into the field of view (using a semi-transparent mirror) and asking participants whether the dot was to the left or right of their invisible hand (Experiment 1) or asking them to adjust the dot's position to that of their invisible hand (Experiment 2). We varied both the measurement frequency (every 10 s, 40 s, 120 s) and the mode of stroking (synchronous, asynchronous, just vision). Surprisingly, with frequent measurements, proprioceptive drift occurs not only in the synchronous stroking condition but also in the two control conditions (asynchronous stroking, just vision). Proprioceptive drift in the synchronous stroking condition is never higher than in the just vision condition. Only continuous exposure to asynchronous stroking prevents proprioceptive drift and thus replicates the differences in drift reported in the literature. By contrast, complementary subjective ratings (questionnaire) show that the feeling of ownership requires synchronous stroking and is not present in the asynchronous stroking condition. Thus, subjective ratings and drift are dissociated. We conclude that different mechanisms of multisensory integration are responsible for proprioceptive drift and the feeling of ownership. Proprioceptive drift relies on visuoproprioceptive integration alone, a process that is inhibited by asynchronous stroking, the most common control condition in Rubber Hand Illusion experiments.
This dissociation implies that conclusions about feelings of ownership cannot be drawn from measuring proprioceptive drift alone.
The concept of agency is of crucial importance in cognitive science and artificial intelligence, and it is often used as an intuitive and rather uncontroversial term, in contrast to more abstract and theoretically heavyweight terms like "intentionality", "rationality" or "mind". However, most of the available definitions of agency are either too loose or too unspecific to allow for a progressive scientific program. They implicitly and unproblematically assume the features that characterize agents, thus obscuring the full potential and challenge of modeling agency. We identify three conditions that a system must meet in order to be considered a genuine agent: a) a system must define its own individuality, b) it must be the active source of activity in its environment (interactional asymmetry) and c) it must regulate this activity in relation to certain norms (normativity). We find that even minimal forms of proto-cellular systems can already provide a paradigmatic example of genuine agency. By abstracting away some specific details of minimal models of living agency, we define the kind of organization that is capable of meeting the required conditions for agency (which is not restricted to living organisms). On this basis, we define agency as an autonomous organization that adaptively regulates its coupling with its environment and contributes to sustaining itself as a consequence. We find that spatiality and temporality are the two fundamental domains in which agency spans different scales. We conclude by giving an outlook on the road that lies ahead in the pursuit to understand, model and synthesize agents.

KEYWORDS: Agency, individuality, interactional asymmetry, normativity, spatiality, temporality.
AGENCY AS A DEPARTURE POINT
The concept of agency plays a central role in contemporary cognitive science as a conceptual currency across different sub-disciplines (especially in embodied, situated and dynamical approaches; Brooks 1991, Beer 1995, Pfeifer & Scheier 1999). It owes this central role to its capacity to capture the notion of a behaving system while avoiding the endless discussions around alternative foundational terms such as "representations", "intentions", "cognitive subject", "conscious being" or "mind". While an insect-like robot already seems to be a minimal instance of agency, the concept is open enough to also cover humans or even collective organizations. From the departure point of agency it is possible to envision a research program that proceeds from the bottom up, from the simplest embodied behavior, grounding higher-level phenomena on increasingly complex forms of situated interactions and their underlying mechanisms. This program would, furthermore, be amenable to dynamical systems modeling cutting across brain, body and world and integrating different levels of mechanistic organization into the same explanatory framework. This possibility has generated considerable enthusiasm and has come to renew some of the foundations of cognitive science (Beer 1995, Hendriks-Jansen 1996, Ch...
Humans combine redundant multisensory estimates into a coherent multimodal percept. Experiments in cue integration have shown for many modality pairs and perceptual tasks that multisensory information is fused in a statistically optimal manner: observers take the unimodal sensory reliability into consideration when performing perceptual judgments. They combine the senses according to the rules of Maximum Likelihood Estimation to maximize overall perceptual precision. This tutorial explains in an accessible manner how to design optimal cue integration experiments and how to analyse the results from these experiments to test whether humans follow the predictions of the optimal cue integration model. The tutorial is meant for novices in multisensory integration and requires very little training in formal models and psychophysical methods. For each step in the experimental design and analysis, rules of thumb and practical examples are provided. We also publish Matlab code for an example experiment on cue integration and a Matlab toolbox for data analysis that accompanies the tutorial online. This way, readers can learn about the techniques by trying them out themselves. We hope to provide readers with the tools necessary to design their own experiments on optimal cue integration and enable them to take part in explaining when, why and how humans combine multisensory information optimally.
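The optimal cue integration model described above can be summarized in a short sketch. Under the standard assumptions of the Maximum Likelihood Estimation account (two unimodal estimates with independent Gaussian noise of known variance), the fused percept is the inverse-variance-weighted average of the unimodal estimates, and the fused variance is never larger than either unimodal variance. The function and variable names below (visual/haptic cues) are illustrative and are not taken from the tutorial's published toolbox:

```python
# Minimal sketch of statistically optimal (MLE) cue integration,
# assuming two unimodal Gaussian estimates with known variances.

def mle_fuse(mu_v, var_v, mu_h, var_h):
    """Fuse two unimodal estimates by reliability (inverse-variance) weighting."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)  # weight of the first (e.g. visual) cue
    w_h = 1 - w_v                                # weight of the second (e.g. haptic) cue
    mu_fused = w_v * mu_v + w_h * mu_h           # combined percept
    var_fused = 1 / (1 / var_v + 1 / var_h)      # fused variance <= either unimodal variance
    return mu_fused, var_fused

# Example: a visual size estimate of 10.0 (variance 1.0) and a haptic
# estimate of 12.0 (variance 4.0). The fused estimate is pulled toward
# the more reliable (visual) cue.
mu, var = mle_fuse(10.0, 1.0, 12.0, 4.0)
print(mu, var)  # 10.4 0.8
```

The prediction that the fused variance is smaller than either unimodal variance is precisely what such experiments test: observers' bimodal discrimination thresholds are compared against this MLE benchmark.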
A difference in skin temperature between the hands has been identified as a physiological correlate of the rubber hand illusion (RHI). The RHI is an illusion of body ownership, where participants perceive body ownership over a rubber hand if they see it being stroked in synchrony with their own occluded hand. The current study set out to replicate this result, i.e., psychologically induced cooling of the stimulated hand using an automated stroking paradigm, where stimulation was delivered by a robot arm (PHANToM™ force-feedback device). After we found no evidence for hand cooling in two experiments using this automated procedure, we reverted to a manual stroking paradigm, which is closer to the one employed in the study that first produced this effect. With this procedure, we observed a relative cooling of the stimulated hand in both the experimental and the control condition. The subjective experience of ownership, as rated by the participants, by contrast, was strictly linked to synchronous stroking in all three experiments. This implies that hand cooling is not a strict correlate of the subjective feeling of hand ownership in the RHI. Factors associated with the differences between the two designs (differences in pressure of tactile stimulation, presence of another person) that were thus far considered irrelevant to the RHI appear to play a role in bringing about this temperature effect.
When visual feedback is delayed during visuomotor tasks, as in some sluggish computer games, humans can modulate their behavior to compensate for the delay. However, opinions on the nature of this compensation diverge. Some studies suggest that humans adapt to feedback delays with lasting changes in motor behavior (aftereffects) and a recalibration of time perception. Other studies have shown little or no evidence for such semipermanent recalibration in the temporal domain. We hypothesize that predictability of the reference signal (target to be tracked) is necessary for semipermanent delay adaptation. To test this hypothesis, we trained participants with a 200 ms visual feedback delay in a visually guided manual tracking task, varying the predictability of the reference signal between conditions, but keeping reference motion and feedback delay constant. In Experiment 1, we focused on motor behavior. Only training in the predictable condition brings about all of the adaptive changes and aftereffects expected from delay adaptation. In Experiment 2, we used a synchronization task to investigate perceived simultaneity (perceptuomotor learning). Supporting the hypothesis, participants recalibrated subjective visuomotor simultaneity only when trained in the predictable condition. Such a shift in perceived simultaneity was also observed in Experiment 3, using an interval estimation task. These results show that delay adaptation in motor control can modulate the perceived temporal alignment of vision and kinesthetically sensed movement. The coadaptation of motor prediction and target prediction (reference extrapolation) seems necessary for such genuine delay adaptation. This offers an explanation for divergent results in the literature.
Figure 1: Left: The basic virtual mirror scenario consists of an empty room and a simplistic mirror avatar. Right: The extended scenario employed in the experiment, where the target movement is shown by a semi-transparent blue "ghost character".

Abstract
Latency between a user's movement and visual feedback is inevitable in every Virtual Reality application, as signal transmission and processing take time. Unfortunately, high end-to-end latency impairs perception and motor performance. While it is possible to reduce feedback delay to tens of milliseconds, these delays will never completely vanish. Currently, there is a gap in the literature regarding the impact of feedback delays on perception and motor performance, as well as on their interplay, in virtual environments employing full-body avatars. With the present study, we address this gap by performing a systematic investigation of different levels of delay across a variety of perceptual and motor tasks during full-body action inside a Cave Automatic Virtual Environment. We presented participants with their virtual mirror image, which responded to their actions with feedback delays ranging from 45 to 350 ms. We measured the impact of these delays on motor performance, sense of agency, sense of body ownership and simultaneity perception by means of psychophysical procedures. Furthermore, we looked at interaction effects between these aspects to identify possible dependencies. The results show that motor performance and simultaneity perception are affected by latencies above 75 ms. Sense of agency and body ownership decline only at latencies above 125 ms and deteriorate further above 300 ms, but they do not break down completely even at the highest tested delay. Interestingly, participants perceptually infer the presence of delays more from their motor error in the task than from the actual level of delay.
Whether or not participants notice a delay in a virtual environment might therefore depend on the motor task and their performance rather than on the actual delay.