2018
DOI: 10.1101/480590
Preprint

Predictive coding of action intentions in dorsal and ventral visual stream is based on visual anticipations, memory-based information and motor preparation

Abstract: Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) …


Cited by 4 publications (7 citation statements) · References 77 publications
“…Regarding goal encoding, the possible description of this type of information might allow to draw inferences on the specific role of premotor, parietal and temporal nodes of the network. We expect to show goal encoding within regions of the IPL (SMG, aIPS), as reported in recent MVPA studies (Gallivan et al 2013b ; Chen et al 2016 , 2018 ; Turella et al 2020 ; Monaco et al 2020 ).…”
Section: Introduction (supporting)
confidence: 77%
“…The parameters for data acquisition were similar to previous published work of our lab (Monaco et al 2019 , 2020 ; Turella et al 2020 ). All the MR data were acquired with a 4 T Bruker MedSpec scanner using an 8-channel head coil.…”
Section: Methods (mentioning)
confidence: 76%
“…We included these studies in the Vision condition because the contrast did not subtract the visual processing of the grasping hand; however, the inclusion of contrasts subtracting some activity related to visual processing might have reduced the sensitivity to detect activation in early visual areas. Also, while univariate analysis might lack the sensitivity to reveal activation in ventral stream areas and early visual cortex under lack of vision, recent multivoxel pattern analysis studies have shown different representations for grasping and reaching action planning with and without visual information in ventral stream and early visual areas (Monaco et al, 2019, 2021). This difference in results indicates that univariate and multivariate analyses provide complementary and not necessarily equivalent information, with MVPA being more sensitive to the distributed representation of information content, and univariate analysis showing more sensitivity to the overall engagement in a task (Coutanche, 2013; Davis et al, 2014; Jimura & Poldrack, 2012).…”
Section: Discussion (mentioning)
confidence: 99%
“…Picking up a pen, for example, would be more successful when one is focused on its orientation rather than its color. Considerable research has investigated the role of fronto-parietal reaching and grasping networks in successfully executing actions (for reviews see: Vesia and Crawford 2012; Gallivan and Culham 2015), and multivoxel pattern analysis has allowed examining the representation of action intention in fronto-parietal and temporal-occipital cortices seconds before participants start to move (Gallivan et al 2011; Gallivan, Chapman, et al 2013; Monaco et al 2019). Action planning strongly relies on the representation of our surroundings for generating accurate and effective movements, and at the same time, it enhances the detection of features that are relevant for behaviour (Gutteling et al 2011).…”
Section: Introduction (mentioning)
confidence: 99%