2019
DOI: 10.1162/jocn_a_01398
Presentation Probability of Visual–Auditory Pairs Modulates Visually Induced Auditory Predictions

Abstract: Predictions about forthcoming auditory events can be established on the basis of preceding visual information. Sounds incongruent with predictive visual information have been found to elicit an enhanced negative ERP in the latency range of the auditory N1 compared with physically identical sounds preceded by congruent visual information. This so-called incongruency response (IR) is interpreted as a reduced prediction error for predicted sounds at a sensory level. The main purpose of this study was to e…

Cited by 7 publications (41 citation statements)
References 45 publications
“…In the latter condition, the probability of, for instance, the high visual-cue-sound combination was increased relative to the low visual-cue-sound combination, leading to an overall presentation frequency of 83% high versus 17% low sounds. Stuckenberg et al. (2019) only observed an IR in this specific condition, when one visual-cue-sound combination was presented more frequently than the other (83/17 condition), whereas no IR was observed when high and low visual-cue-sound combinations were presented with equal probability (50/50 condition). They concluded that the increased global probability of one visual-cue-sound combination is a prerequisite for the formation of a strong bimodal association.…”
mentioning
confidence: 70%
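The 83/17 versus 50/50 manipulation described above can be sketched as a trial-sequence generator. This is an illustrative reconstruction, not the authors' stimulus code; the function name and parameters are assumptions.

```python
import random

def make_sequence(n_trials, p_high=0.83, seed=1):
    """Generate a trial list where the high visual-cue/sound combination
    appears on a proportion p_high of trials and the low combination on
    the remainder. p_high=0.83 approximates the 83/17 condition;
    p_high=0.5 gives the 50/50 condition."""
    rng = random.Random(seed)
    return ["high" if rng.random() < p_high else "low"
            for _ in range(n_trials)]

seq = make_sequence(1000)
p_observed = seq.count("high") / len(seq)  # close to 0.83 for this condition
```

Setting `p_high=0.5` removes the global probability asymmetry that, per the quoted statement, appears to be a prerequisite for the IR.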
“…Given these contradictory findings (Stuckenberg et al., 2019 vs. Widmann et al., 2004) and the literature on short-latency components (Wacongne et al., 2011), the present study aims to better understand the role of local repetition in the elicitation of the IR.…”
mentioning
confidence: 96%
“…The Bayes factor (BF10) was calculated using 10,000 Monte Carlo sampling iterations; the null hypothesis corresponded to a standardized effect size δ = 0, while the alternative hypothesis was defined as a Cauchy prior distribution centred around 0 with a scaling factor of r = 0.707 (Rouder, Morey, Speckman, & Province, 2012). In line with conventional Bayes factor interpretation (Jeffreys, 1961; Lee & Wagenmakers, 2013) and with previous studies reporting Bayes factors (Korka et al., 2019; Marzecová et al., 2018; Stuckenberg, Schröger, & Widmann, 2019), data were taken as moderate evidence for the alternative (or null) hypothesis if the BF10 was greater than 3 (or lower than 0.33), while values close to 1 were considered only weakly informative. Values greater than 10 (or smaller than 0.1) were considered strong evidence for the alternative (or null) hypothesis.…”
Section: Discussion
mentioning
confidence: 98%
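The Bayes factor decision rule quoted above can be written out as a simple classifier. This is a minimal sketch of the stated thresholds (3, 10, and their reciprocals 0.33, 0.1); the function name is illustrative, not from the original paper.

```python
def interpret_bf10(bf10):
    """Classify a Bayes factor BF10 using the thresholds described in the
    quoted passage (Jeffreys, 1961; Lee & Wagenmakers, 2013):
    >10 strong H1, >3 moderate H1, ~1 weakly informative,
    <0.33 moderate H0, <0.1 strong H0."""
    if bf10 > 10:
        return "strong evidence for the alternative hypothesis"
    if bf10 > 3:
        return "moderate evidence for the alternative hypothesis"
    if bf10 >= 0.33:
        return "only weakly informative"
    if bf10 >= 0.1:
        return "moderate evidence for the null hypothesis"
    return "strong evidence for the null hypothesis"

print(interpret_bf10(12.5))  # strong evidence for the alternative hypothesis
print(interpret_bf10(0.2))   # moderate evidence for the null hypothesis
```

Note that the boundaries are asymmetric only in notation: 0.33 and 0.1 are approximately the reciprocals of 3 and 10, so evidence for the null is judged on the same scale as evidence for the alternative.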