2013
DOI: 10.3389/fpsyg.2013.00388

Speech through ears and eyes: interfacing the senses with the supramodal brain

Abstract: The comprehension of auditory-visual (AV) speech integration has greatly benefited from recent advances in neurosciences and multisensory research. AV speech integration raises numerous questions relevant to the computational rules needed for binding information (within and across sensory modalities), the representational format in which speech information is encoded in the brain (e.g., auditory vs. articulatory), or how AV speech ultimately interfaces with the linguistic system. The following non-exhaustive r…

Citations: Cited by 68 publications (59 citation statements)
References: 188 publications (267 reference statements)
“…Whereas alpha-band oscillations may be involved in speech analysis at the vowel level, the same mechanism may apply to other time-scales of analysis, such as the theta band for syllables (Luo and Poeppel, 2007), depending on the input and task. There is indeed evidence that different combinations of oscillatory frequencies can be entrained, depending on the context (Kösem and van Wassenhove, 2012; Schroeder et al., 2008; van Wassenhove, 2013). Perhaps the most intriguing example, albeit still speculative, is that of audiovisual speech (reviewed by Giraud and Poeppel, 2012; Schroeder et al., 2008).…”
Section: How Do Behavioral Goals Guide the Flexible Use of Canonic…
confidence: 99%
“…Recent predictive coding models of perception suggest that rather than passively categorizing the bottom-up signal, observers make active predictions about what they are likely to hear (and see), and that perception is based on the difference between these predictions and the bottom-up signal (Clark, 2013; Friston, 2005; Kumar et al., 2011; Rao & Ballard, 1999; see McMurray & Jongman, 2011, and Kleinschmidt & Jaeger for applications to speech perception). Visual speech information could play a crucial role in such predictive processes (Arnal & Giraud, 2012; van Wassenhove, 2013) because in many cases, preparatory gestures (e.g., closing the lips before a word-initial /b/, raising the tongue before a /d/) are visible before any acoustic signal is produced (Chandrasekaran, Trubanova, Stillittano, Caplier, & Ghazanfar, 2009; Schwartz & Savariaux, 2014). Thus, for the listener, the visual speech signal could set up predictions about what is about to be heard.…”
Section: Introduction
confidence: 99%
“…Hence, the binding of auditory to visual speech information appeared to be affected by the prior predictability of the visual stimulus. In other words, when visual cues fail to provide relevant information, they appear to be weighted less during the early processing stages, perhaps even prior to the influence of top-down attention (e.g., Massaro, 1998; van Wassenhove, Grant, & Poeppel, 2005; van Wassenhove, 2013).…”
confidence: 99%