Using MRI-guided off-line TMS, we targeted two areas implicated in biological motion processing: ventral premotor cortex (PMC) and posterior STS (pSTS), plus a control site (vertex). Participants performed a detection task on noise-masked point-light displays of human animations and scrambled versions of the same stimuli. Perceptual thresholds were determined individually. Performance was measured before and after 20 sec of continuous theta burst stimulation of PMC, pSTS, and control (each tested on different days). A matched nonbiological object motion task (detecting point-light displays of translating polygons) served as a further control. Data were analyzed within the signal detection framework. Sensitivity (d') significantly decreased after TMS of PMC. There was a marginally significant decline in d' after TMS of pSTS but not of control site. Criterion (response bias) was also significantly affected by TMS over PMC. Specifically, subjects made significantly more false alarms post-TMS of PMC. These effects were specific to biological motion and not found for the nonbiological control task. To summarize, we report that TMS over PMC reduces sensitivity to biological motion perception. Furthermore, pSTS and PMC may have distinct roles in biological motion processing as behavioral performance differs following TMS in each area. Only TMS over PMC led to a significant increase in false alarms, which was not found for other brain areas or for the control task. TMS of PMC may have interfered with refining judgments about biological motion perception, possibly because access to the perceiver's own motor representations was compromised.
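The abstract reports sensitivity (d') and criterion effects within the signal detection framework. As a minimal illustration of how these two measures are derived from trial counts (this is standard signal detection theory, not the authors' specific analysis code; the log-linear correction is one common convention), consider:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and criterion (c) from trial counts.

    Applies the log-linear correction (add 0.5 to each cell) so that
    hit and false-alarm rates of exactly 0 or 1 do not produce
    infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

On this convention, an increase in false alarms with hits held constant lowers both d' and the criterion (a more liberal response bias), which is the pattern the abstract describes after TMS of PMC.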
Predictive mechanisms are essential to successfully interact with the environment and to compensate for delays in the transmission of neural signals. However, whether and how we predict multisensory action outcomes remains largely unknown. Here we investigated the existence of multisensory predictive mechanisms in a context where actions have outcomes in different modalities. During fMRI data acquisition, auditory, visual, and auditory-visual stimuli were presented in active and passive conditions. In the active condition, a self-initiated button press elicited the stimuli with variable short delays (0-417 ms) between action and outcome, and participants had to detect the presence of a delay for the auditory or visual outcome (task modality). In the passive condition, stimuli appeared automatically, and participants had to detect the number of stimulus modalities (unimodal/bimodal). For action consequences compared to identical but unpredictable control stimuli, we observed suppression of the blood oxygen level dependent (BOLD) response in a broad network including bilateral auditory and visual cortices. This effect was independent of task modality or stimulus modality and strongest for trials where no delay was detected (undetected trials).
Action-feedback monitoring is essential to ensure meaningful interactions with the external world. This process involves generating efference copy-based sensory predictions and comparing these with the actual action-feedback. As neural correlates of comparator processes, previous fMRI studies have provided heterogeneous results, including the cerebellum, angular and middle temporal gyrus. However, these studies usually comprised only self-generated actions. Therefore, they might have induced not only action-based prediction errors, but also general sensory mismatch errors. Here, we aimed to disentangle these processes using a custom-made fMRI-compatible movement device, generating active and passive hand movements with identical sensory feedback. Online visual feedback of the hand was presented with a variable delay. Participants had to judge whether the feedback was delayed. Activity in the right cerebellum correlated more positively with delay in active than in passive trials. Interestingly, we also observed activation in the angular and middle temporal gyri, but across both active and passive conditions. This suggests that the cerebellum is a comparator area specific to voluntary action, whereas angular and middle temporal gyri seem to detect more general intersensory conflict. Correlations with behavior and cerebellar activity nevertheless suggest involvement of these temporoparietal areas in processing and awareness of temporal discrepancies in action-feedback monitoring.
Predicting the sensory consequences of our own actions contributes to efficient sensory processing and might help distinguish the consequences of self- versus externally generated actions. Previous research using unimodal stimuli has provided evidence for the existence of a forward model, which explains how such sensory predictions are generated and used to guide behavior. However, whether and how we predict multisensory action outcomes remains largely unknown. Here, we investigated this question in two behavioral experiments. In Experiment 1, we presented unimodal (visual or auditory) and bimodal (visual and auditory) sensory feedback with various delays after a self-initiated button press. Participants had to report whether they detected a delay between their button press and the stimulus in the predefined task modality. In Experiment 2, the sensory feedback and task were the same as in Experiment 1, but in half of the trials the action was externally generated. We observed enhanced delay detection for bimodal relative to unimodal trials, with better performance in general for actively generated actions. Furthermore, in the active condition, the bimodal advantage was largest when the stimulus in the task-irrelevant modality was not delayed, that is, when it was time-contiguous with the action, as compared to when both the task-relevant and task-irrelevant modalities were delayed. This specific enhancement for trials with a nondelayed task-irrelevant modality was absent in the passive condition. These results suggest that a forward model creates predictions for multiple modalities, and consequently contributes to multisensory interactions in the context of action.
Adaptation to delays between actions and sensory feedback is important for efficiently interacting with our environment. Adaptation may rely on predictions of action-feedback pairing (motor-sensory component), or on predictions of tactile-proprioceptive sensation from the action and sensory feedback of the action (inter-sensory component). The reliability of temporal information might differ across sensory feedback modalities (e.g., auditory or visual), which in turn influences adaptation. Here, we investigated the role of motor-sensory and inter-sensory components in sensorimotor temporal recalibration for motor-auditory (button press-tone) and motor-visual (button press-Gabor patch) events. In the adaptation phase of the experiment, action-feedback pairs were presented with systematic temporal delays (0 ms or 150 ms). In the subsequent test phase, auditory or visual feedback of the action was presented with variable delays, and participants were asked whether they detected a delay. To disentangle the motor-sensory from the inter-sensory component, we varied the movement type (active button press or passive depression of the button) at adaptation and test. Our results suggest that motor-auditory recalibration is mainly driven by the motor-sensory component, whereas motor-visual recalibration is mainly driven by the inter-sensory component. Recalibration transferred from vision to audition, but not from audition to vision. These results indicate that motor-sensory and inter-sensory components contribute to recalibration in a modality-dependent manner.
Forward models can predict sensory consequences of self-action, which is reflected by less neural processing for actively than passively generated sensory inputs (BOLD suppression effect). However, it remains open whether forward models take the identity of a moving body part into account when predicting the sensory consequences of an action. In the current study, fMRI was used to investigate the neural correlates of active and passive hand movements during which participants saw either an on-line display of their own hand or someone else's hand moving in accordance with their movement. Participants had to detect delays (0-417 ms) between their movement and the displays. Analyses revealed reduced activation in sensory areas and higher delay detection thresholds for active versus passive movements. Furthermore, there was increased activation in the hippocampus, the amygdala, and the middle temporal gyrus when someone else's hand was seen. Most importantly, in posterior parietal (angular gyrus and precuneus), frontal (middle, superior, and medial frontal gyrus), and temporal (middle temporal gyrus) regions, suppression for actively versus passively generated feedback was stronger when participants were viewing their own compared to someone else's hand. Our results suggest that forward models can take hand identity into account when predicting sensory action consequences. Keywords: agency, fMRI, forward model, hand identity, hand movement, prediction, self-other
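Several of the abstracts above report delay detection thresholds estimated from responses to graded action-feedback delays (e.g., 0-417 ms). As an illustrative sketch of how such a threshold can be estimated (a generic cumulative-Gaussian psychometric fit via grid search, not the authors' actual fitting procedure; the grid ranges and 50% criterion are assumptions), consider:

```python
from statistics import NormalDist

def fit_threshold(delays_ms, p_detect, criterion=0.5):
    """Fit a cumulative-Gaussian psychometric function to per-delay
    detection proportions by least squares over a coarse (mu, sigma)
    grid, then return the delay at which the fitted curve reaches
    `criterion` (here: the 50% detection threshold).
    """
    best = None
    for mu in range(0, 420, 5):          # candidate curve midpoints (ms)
        for sigma in range(10, 300, 5):  # candidate curve widths (ms)
            cdf = NormalDist(mu, sigma).cdf
            err = sum((cdf(d) - p) ** 2 for d, p in zip(delays_ms, p_detect))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    _, mu, sigma = best
    # Invert the fitted curve at the criterion probability
    return NormalDist(mu, sigma).inv_cdf(criterion)
```

A coarser grid trades precision for speed; in practice a gradient-based maximum-likelihood fit (e.g., with scipy) would be more typical, but the grid version keeps the sketch dependency-free.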