2020
DOI: 10.1038/s41598-020-75201-7

Audio-visual combination of syllables involves time-sensitive dynamics following from fusion failure

Abstract: In face-to-face communication, audio-visual (AV) stimuli can be fused, combined or perceived as mismatching. While the left superior temporal sulcus (STS) is presumably the locus of AV integration, the process leading to combination is unknown. Based on previous modelling work, we hypothesize that combination results from a complex dynamic originating in a failure to integrate AV inputs, followed by a reconstruction of the most plausible AV sequence. In two different behavioural tasks and one MEG experiment, w…

Cited by 3 publications (3 citation statements). References: 78 publications (94 reference statements).
“…As a classic multisensory integration area, the pSTG has been shown in many functional neuroimaging studies to be associated with audiovisual speech integration (Daniel et al. 2004; Michael et al. 2004). A recent DCM study has shown that the STS receives and reorders the speech inputs and then determines the multimodal syllable representation (Bouton et al. 2020). In contrast, the PrG has been proposed as a top-down modulator that facilitates audiovisual speech comprehension and multisensory integration (Choi et al. 2018; Park et al. 2018).…”
Section: Discussion
confidence: 99%
“…A DCM study has shown that the bidirectional connections between the premotor cortex and the STS, and between the planum temporale and the premotor cortex, are significant during speech perception, supporting the involvement of the premotor cortex (Osnes et al. 2011). In addition, a study of the STS effective connectivity signature has suggested that the outcome of audiovisual speech integration primarily depends on whether the STS converges onto a multimodal syllable representation (Bouton et al. 2020). In general, the available evidence suggests both indirect and bidirectional influences of the STG/S and motor cortex on sensory processes during speech perception.…”
Section: Introduction
confidence: 99%
“…This interpretation is consistent with previous reports of increased response latency for McGurk trials [119], though this assertion is made cautiously for multiple reasons. First, increased response latency for McGurk stimuli is not universally reported [120], with variability across studies potentially rooted in the inclusion or absence of McGurk trials that do not elicit sensory fusion (i.e., ‘unfused’ McGurk trials). Second, the design of the current study, in which subjects were cued to respond at a given time (i.e., 1500 ms post-stimulus), precludes meaningful interpretation of response latencies.…”
Section: Discussion
confidence: 99%