2013
DOI: 10.1016/j.ijpsycho.2013.06.016

Visual information constrains early and late stages of spoken-word recognition in sentence context

Abstract: Audiovisual speech perception has been frequently studied considering phoneme, syllable and word processing levels. Here, we examined the constraints that visual speech information might exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that could be either strongly or weakly predictable on the basis of the prior semantic sentential context and whose initial phoneme varied in the degree of visual saliency from lip movements. When…

Cited by 28 publications (38 citation statements, 2014–2024) · References 49 publications
“…This result replicated previous work using cross-modal verification tasks (e.g., Teder-Salejarvi et al., 2005; Dikker and Pylkkanen, 2011; Brunellière et al., 2013). Critically, however, such an effect was particularly prominent for incongruent pairs that had implausible content.…”
Section: Discussion (supporting, confidence: 89%)
“…Verification tasks also provide additional insights about the role of congruency on processing costs. Dikker and Pylkkanen (2011), for example, used a word-picture matching task and demonstrated that when the content of a word does not completely match the content of a subsequently presented picture, a negative shift of brain activity is observed as early as 100 ms after picture onset (cf. Brunellière et al., 2013, for corroborating evidence in spoken-word recognition). Similar results are obtained with other cross-modal verification tasks when the congruency is manipulated between: (a) the source of an audio signal and its location in the visual context (i.e., left and right) (Teder-Salejarvi et al., 2005), or (b) the emotional valence of speech and an associated facial expression (Pourtois et al., 2000).…”
Section: Introduction (supporting, confidence: 56%)
“…The effect became manifest as ERPs that were more negative for A-only than for the AV-V difference waves, which is consistent with the ERPs observed by Brunellière, Sánchez-García, Ikumi, and Soto-Faraco (2013) for highly salient lip-read conditions, although these authors did not find this effect to be significant. Since Brunellière et al. (2013) used ongoing AV speech sentences whereas we only presented lip-read information in the final syllable, it seems likely that the differences are related to the differences in experimental procedures. Possibly, insertion of a lip-read signal during ongoing auditory stimulation renders it highly unexpected for participants.…”
Section: Experiment 1: Discussion (mentioning, confidence: 99%)
“…There are a variety of electrophysiological markers of audiovisual interactions in speech (e.g., Saint-Amour et al., 2007; Bernstein et al., 2008; Ponton et al., 2009; Arnal et al., 2011). Although these markers are not exclusive to audiovisual speech (Stekelenburg and Vroomen, 2007), they are thought to reflect important aspects of the speech perception process such as cross-modal prediction and phonological processing (Brunellière et al., 2013).…”
Section: Introduction (mentioning, confidence: 99%)