2013
DOI: 10.1016/j.bandl.2013.05.009
Neural correlates of audiovisual speech processing in a second language

Abstract (abbreviations: L1 = native language, L2 = non-native language, pSTS = posterior superior temporal sulcus, STG = superior temporal gyrus, AV = audiovisual, AVc = audiovisual congruent, AVi = audiovisual incongruent, A = auditory, V = visual, B = baseline, MSI = multisensory interaction): Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) s…

Cited by 15 publications (11 citation statements); references 54 publications (86 reference statements).
“…From a functional perspective, our results are in keeping with the notion that during speech perception, the auditory and visual sensory systems interact at multiple levels of processing (Schwartz et al, 1998; Nahorna et al, 2012; Barrós-Loscertales et al, 2013), and that top-down modulatory signals can influence at least some of these levels. Multisensory links do not solely rely on feed-forward convergence from unisensory regions to multisensory brain areas, but also implicate back-projections from association areas to multiple levels of (early) sensory processing that are based on current task demands (Calvert et al, 1999, 2000; Macaluso et al, 2000; Friston, 2005; Driver and Noesselt, 2008).…”
Section: Discussion (supporting, confidence: 89%)
“…In a recent example, Man et al (2012) demonstrated similar neural activity patterns in the left pSTS for non-speech visual-only representation and acoustic-only representation of the same object. Supporting our findings, the left pSTS has been consistently recruited in AV language studies using the max criterion for AV integration (conjunction of AV > A O and AV > V O ; Beauchamp, 2005b) of congruent AV stimuli including various stimulus types, such as sentences in native and non-native language (Barros-Loscertales et al, 2013), words (Szycik et al, 2008), and visual letters paired with speech sounds (van Atteveldt et al, 2004, 2007). Similarly, the left pSTS showed increased activity to congruent AV story stimuli compared to the sum of activity for acoustic-only and visual-only stimulation (Calvert et al, 2000); others have also reported supra-additive AV speech effects in STS (Wright et al, 2003).…”
Section: Discussion (supporting, confidence: 83%)
“…We suggest that this method allowed for the isolation of AV-processing regions most likely to be involved in processing congruent AV speech or the change in perception accompanying the McGurk effect. This statistical approach has been successfully utilized to isolate AV-processing regions in several language studies (van Atteveldt et al, 2004, 2007; Szycik et al, 2008; Barros-Loscertales et al, 2013) and other types of AV studies (Beauchamp, 2005b; Hein et al, 2007; Watson et al, 2014). Since others have raised the issue of high individual anatomical/functional variability concerning the multisensory portion of the STS (Beauchamp et al, 2010; Nath and Beauchamp, 2012), we confirmed our group results in single-subject analyses, accounting for individual differences in gyral anatomy (Geschwind and Levitsky, 1968) and functional localization within pSTS.…”
Section: Introduction (mentioning, confidence: 99%)
“…Others have proposed that many more sensory areas than previously assumed may have multimodal properties (Driver and Noesselt, 2008; Ghazanfar and Schroeder, 2006; Hackett and Schroeder, 2009), and previous studies have shown plasticity of sensory areas in blind or deaf individuals (Amedi et al, 2003; Amedi et al, 2007; Bavelier and Neville, 2002; Bedny et al, 2011; Finney et al, 2001; Rauschecker, 1995; Renier et al, 2010; Striem-Amit and Amedi, 2014; Striem-Amit et al, 2012; Weeks et al, 2000). A recent study of non-native, second language processing recruited bilateral occipital cortex during congruent versus incongruent stimulation of AV sentences (Barros-Loscertales et al, 2013). Other studies have shown FFG activation in voice/speaker recognition tasks of auditory-only speech (von Kriegstein et al, 2005), and FFG recruitment during face processing (Haxby et al, 2000; Hoffman and Haxby, 2000).…”
Section: Discussion (mentioning, confidence: 99%)