2018
DOI: 10.1111/nyas.13615

Causal inference and temporal predictions in audiovisual perception of speech and music

Abstract: To form a coherent percept of the environment, the brain must integrate sensory signals emanating from a common source but segregate those from different sources. Temporal regularities are prominent cues for multisensory integration, particularly for speech and music perception. In line with models of predictive coding, we suggest that the brain adapts an internal model to the statistical regularities in its environment. This internal model enables cross-sensory and sensorimotor temporal predictions as a mechanism…
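The integrate-versus-segregate problem described in the abstract is usually formalized as Bayesian causal inference over a latent cause variable. As a minimal sketch in standard notation (this follows the broader causal-inference literature, e.g., Körding et al., 2007, and is not reproduced from the paper itself): given auditory and visual measurements $x_A$ and $x_V$ and a prior probability $\pi_c$ of a common cause ($C = 1$), the observer computes the posterior

$$P(C = 1 \mid x_A, x_V) = \frac{P(x_A, x_V \mid C = 1)\,\pi_c}{P(x_A, x_V \mid C = 1)\,\pi_c + P(x_A, x_V \mid C = 2)\,(1 - \pi_c)}$$

and, under model averaging, weights the fused (common-cause) and unisensory (independent-cause) estimates by that posterior:

$$\hat{s}_A = P(C = 1 \mid x_A, x_V)\,\hat{s}_{A,\,C=1} + P(C = 2 \mid x_A, x_V)\,\hat{s}_{A,\,C=2}.$$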

Cited by 29 publications (25 citation statements) | References 100 publications
“…When integrating audiovisual information, we rely on the congruence between auditory and visual cues in terms of their object identity, their spatial location, or their temporal synchrony (Noppeney and Lee 2018). To date, while previous studies have used functional magnetic resonance imaging (fMRI) to investigate the anatomical sites of audiovisual interactions (Gau and Noppeney 2016; Werner and Noppeney 2010), it is still unclear how congruent auditory input modulates the time course of visual object processing.…”
Section: Introduction
confidence: 99%
“…Previous research showed subtle differences between musicians and non-musicians in AV perception (Musacchia et al., 2008; Lee and Noppeney, 2011; Paraskevopoulos et al., 2012; Proverbio et al., 2016). For example, a perceiver's prior expectations can influence temporal integration, and relevant training can generate more precise temporal predictions (Noppeney and Lee, 2018), leading to higher sensitivity to AV misalignments (for a behavioral study see Behne et al., 2013; for an EEG study see Behne et al., 2017). Another study (Petrini et al., 2009a) showed that musicians have a more refined integration window for AV music perception compared with non-musicians; they are also more accurate at predicting an upcoming sound when the visual information is missing in AV music perception (Petrini et al., 2009b).…”
Section: Discussion
confidence: 99%