2011
DOI: 10.1038/nn.2810

Transitions in neural oscillations reflect prediction errors generated in audiovisual speech

Abstract: According to the predictive coding theory, top-down predictions are conveyed by backward connections and prediction errors are propagated forward across the cortical hierarchy. Using MEG in humans, we show that violating multisensory predictions causes a fundamental and qualitative change in both the frequency and spatial distribution of cortical activity. When visual speech input correctly predicted auditory speech signals, a slow delta regime (3-4 Hz) developed in higher-order speech areas. In contrast, when…

Cited by 302 publications (321 citation statements: 52 supporting, 267 mentioning, 2 contrasting)
References 50 publications
“…Brain activity in early auditory cortex in fusion trials in which participants do give fused responses is very similar to that observed upon congruent token presentations (Kislyuk, Möttönen, & Sams, 2008), with differences appearing elsewhere (Erickson et al., 2014). Arnal et al. (2011) compared congruent and incongruent non-McGurk AV speech tokens, which led to combination percepts only when incongruent, and found different patterns of brain activation. The visual input showed two kinds of effects on the perceptual process: an early nonspecific effect on auditory cortex and a token-specific effect related to the informative content of the visual modality about the consonant.…”
supporting
confidence: 57%
“…Another difference is our introduction of top-down predictive signals that bias activity at the unimodal level towards patterns compatible with existing representations at the top level. Therefore, our model does not process modalities independently; the processing in one modality influences the processing in the other, as suggested by electrophysiological data (Arnal, Wyart, & Giraud, 2011; van Wassenhove et al., 2005).…”
Section: Discussion
mentioning
confidence: 88%
“…As noted by Friston et al. (2005), the closely similar neuronal architecture of cortical layers throughout the cerebral cortex supports the view that a similar computational principle of predictive coding may apply across the multiple hierarchical levels of the cortical areas of the brain. Thus, our model may be used to account for higher-order instances of mismatch responses, such as the distinct MMNs evoked by a change in phoneme versus speaker (Giard et al., 1995; Dehaene-Lambertz, 1997), or the mismatch responses observed outside the auditory modality, whether in the visual (Tales et al., 1999; Pazo-Alvarez et al., 2003), olfactory (Krauel et al., 1999; Pause and Krauel, 2000), or somatosensory (Kekoni et al., 1997; Shinozaki et al., 1998) modalities, or even in a crossmodal context (Arnal et al., 2011).…”
Section: Extensions and Limits of the Model
mentioning
confidence: 99%
“…Neural synchronizations in higher frequency ranges (e.g. the gamma range, >40 Hz) provide a reliable index of feature binding within and across sensory modalities (Arnal et al., 2011; Engel et al., 1991; Roelfsema et al., 1997; Senkowski et al., 2008; Tallon-Baudry and Bertrand, 1999). In multisensory integration, low-frequency neural oscillations (delta, 1-2 Hz) play a crucial role in temporal selection (Besle et al., 2011; Fiebelkorn et al., 2013; Gomez-Ramirez et al., 2011; Lakatos et al., 2008; Schroeder and Lakatos, 2009) and in the integration of AV information (Fiebelkorn et al., 2011; Kösem and van Wassenhove, 2012; Luo et al., 2010).…”
Section: Neural Oscillations: Multiplex Encoding of Information
mentioning
confidence: 99%