2014
DOI: 10.1016/j.neuroimage.2014.01.003

Action planning and predictive coding when speaking

Abstract: Across the animal kingdom, sensations resulting from an animal's own actions are processed differently from sensations resulting from external sources, with self-generated sensations being suppressed. A forward model has been proposed to explain this process across sensorimotor domains. During vocalization, reduced processing of one's own speech is believed to result from a comparison of speech sounds to corollary discharges of intended speech production generated from efference copies of commands to speak. Un…
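The forward-model account summarized in the abstract can be phrased computationally: an efference copy of the speech motor command is converted into a corollary discharge (a prediction of the expected sensory consequences), which is compared with incoming auditory feedback, so self-produced sounds leave a smaller residual than externally produced ones. The sketch below is a toy illustration of that idea only, not code or a model from the paper; the function name, the identity mapping from efference copy to prediction, the gain parameter, and the example numbers are all assumptions.

```python
# Toy sketch (illustrative assumptions only): efference-copy-based
# attenuation of self-generated auditory feedback.
import numpy as np

def suppressed_response(feedback, efference_copy, gain=1.0):
    """Return the residual (prediction error) after subtracting the
    corollary discharge predicted from the efference copy.

    feedback       : observed sensory signal (e.g., heard speech)
    efference_copy : copy of the motor command used to predict its
                     sensory consequences (assumed identity mapping)
    gain           : how strongly the prediction cancels the feedback
    """
    corollary_discharge = efference_copy
    return feedback - gain * corollary_discharge

# Self-produced sound: feedback closely matches the prediction,
# so the residual driving the auditory response is small.
self_produced = suppressed_response(np.array([1.0, 0.9, 1.1]),
                                    np.array([1.0, 1.0, 1.0]))

# Externally produced sound: no efference copy, nothing is cancelled,
# so the full signal passes through (larger response).
external = suppressed_response(np.array([1.0, 0.9, 1.1]),
                               np.zeros(3))

print(np.abs(self_produced).mean(), np.abs(external).mean())  # small vs. large
```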

Cited by 75 publications (108 citation statements)
References 63 publications
“…The P2 component was distributed more posteriorly, with stronger activity over the central electrodes and an inversion over the bilateral temporo-parietal areas. Based on the findings from previous studies (Butler and Trainor, 2012; Wang et al., 2014), we suggest that the N1 responses arise from primary and secondary cortical auditory areas, and that their modulation in this study is a neurophysiological index of sensory feedback suppression in response to temporally predictable pitch-shift stimuli during vocal production and motor control. Fewer details are known about the neural generators of the P2 component, but its longer latency suggests that this response component reflects higher-level sensory-motor and cognitive processes and may receive contributions from multiple sources within sensory, motor and frontal cortices.…”
Section: Discussion (supporting)
confidence: 75%
“…In addition, these findings suggest that exposure to repeated presentations of predictable stimuli results in an increased contribution of feedforward mechanisms during vocal motor control. This reasoning supports the internal forward model framework for prediction: learned predictions result in more accurate efference copies and, consequently, a decreased mismatch in sensory feedback (Chen et al., 2012; Korzyukov et al., 2012; Scheerer and Jones, 2014; Wang et al., 2014; Wolpert and Flanagan, 2001). …”
Section: Introduction (supporting)
confidence: 74%
“…In men, higher AVP was associated with less connectedness and efficiency in left IFG (pars triangularis) and left STG, two regions important for verbal abilities (Costafreda et al., 2006; Wagner et al., 2014; Wang et al., 2014). Consistent with this idea, higher connectedness and efficiency in left pars triangularis was associated with better verbal fluency in men.…”
Section: Discussion (mentioning)
confidence: 97%
“…For example, visual predictions have often been studied using joysticks controlling a cursor, or natural feedback of the hand using a camera, with either temporal or spatial deviations of the feedback (Farrer, Bouchereau, Jeannerod, & Franck, 2008; Hoover & Harris, 2012; Leube et al., 2003). Actively producing speech and listening to recorded speech stimuli have been used to assess auditory prediction errors (Ford, Gray, Faustman, Heinks, & Mathalon, 2005; Wang et al., 2014). Finally, tactile stimuli with various delays have been used to investigate predictive mechanisms in the tactile domain (e.g., Blakemore, Wolpert, & Frith, 1998).…”
mentioning
confidence: 99%