2021
DOI: 10.1111/psyp.13811
Cross‐modal predictive processing depends on context rather than local contingencies

Abstract: Visual information may influence the processing of auditory information, as illustrated by phenomena like the ventriloquist illusion (e.g., Alais & Burr, 2004), the McGurk effect (McGurk & MacDonald, 1976), or cross-modal spatial attention effects (e.g., Eimer & Driver, 2001). One way in which visual information can influence the processing of auditory information

Cited by 4 publications (4 citation statements); references 39 publications (102 reference statements).
“…In other words, these two processing systems may not be organized in a modular fashion in a situation where the intention-based prediction system is in charge. From a more general view, this perspective is in line with studies showing that context is highly relevant for modulations of early auditory processing (e.g., Dercksen et al, 2021); and, vice-versa, the execution of a simple action (e.g., a right button-press) depends on the specific context, for example, whether the button-press denotes a "yes" or a "no"-answer (Aberbach-Goodman et al, 2022).…”
Section: Strong Impact of Action Intention on Mismatch Negativity Whe… (citation type: supporting; confidence: 74%)
“…Second, there is evidence for cross-modal prediction for spoken languages (Sánchez-García et al, 2011, 2013). Third, cross-modal prediction has also been found for other non-linguistic cognitive domains, such as perception of emotions (Jessen and Kotz, 2013) and music (Dercksen et al, 2021). Given this evidence for cross-modal interactions in both linguistic and non-linguistic domains, further studies might investigate whether bimodal bilinguals make cross-modal linguistic predictions.…”
Section: Discussion (citation type: mentioning; confidence: 91%)
“…Studies in the field typically use d prime as an index of accuracy, which identifies signal detection but cannot attribute the difference to either of the two conditions. Nonetheless, congruent processing relies mostly on predictive coding principles [58]. As such, the audiovisual stimuli in the congruent condition do not violate the underlying predictions and, hence, engage predominantly top-down mechanisms, which are not explicitly trained in musicians.…”
Section: Discussion (citation type: mentioning; confidence: 99%)