Crossmodal binding: Evaluating the “unity assumption” using audiovisual speech stimuli
Vatakis & Spence (2007)
DOI: 10.3758/bf03193776

Cited by 263 publications (284 citation statements)
References 59 publications (95 reference statements)
“…The influence of such expectations has been demonstrated, e.g., by more erroneous temporal-order judgements on gender-matching than gender-mismatching AV speech clips (Vatakis and Spence 2007). Multisensory speech congruence detection might be particularly sensitive to expectations, as suggested by results indicating that the McGurk illusion does not occur if participants interpret the sounds as noise (Tuomainen et al. 2005; Figure 1B-C).…”
Section: Role of the Observer
Confidence: 86%
“…van Atteveldt et al. 2007). These processing enhancements might likewise arise from the expectations that congruent crossmodal signals likely share a common source (Vatakis and Spence 2007). The co-activation and expectation mechanisms are not mutually exclusive (Figure 1A). Notably, the detection of congruence across certain perceptual features of multisensory stimuli might sometimes have a more hardwired nature, possibly based on the properties of the receptive fields of multisensory neurons (Figure 1A).…”
Section: Stimulus-based Effects
Confidence: 99%
“…When judging the temporal order of the individual components of face-voice pairings, performance is worse when the genders of the faces and voices agree. Vatakis and Spence (2007) explain that the weakened performance indicates integration, which inhibits comparative analysis of the individual components. This identity cue does not seem to be triggered under all conditions.…”
Section: Causality and Unity
Confidence: 99%
“…There is evidence, from speech stimuli, for an effect of semantic congruency that supports the unity assumption. For example, Vatakis and Spence (2007) found that participants were worse at making temporal order judgments between sound and vision for speech videos when the gender was matched across modalities than when one was male and one female. They attributed this finding to the mismatched stimuli being less susceptible to temporal ventriloquism, since they were less likely to be perceived as originating from the same source.…”
Section: Discussion
Confidence: 99%