2013
DOI: 10.1371/journal.pone.0080265

Effective Cerebral Connectivity during Silent Speech Reading Revealed by Functional Magnetic Resonance Imaging

Abstract: Seeing the articulatory gestures of the speaker (“speech reading”) enhances speech perception, especially in noisy conditions. Recent neuroimaging studies tentatively suggest that speech reading activates the speech motor system, which then influences superior-posterior temporal lobe auditory areas via an efference copy. Here, nineteen healthy volunteers were presented with silent video clips of a person articulating Finnish vowels /a/, /i/ (non-targets), and /o/ (targets) during event-related functional magnetic resonance imaging…

Citations: 19 citing publications (16 citation statements)
References: 60 referenced publications (59 reference statements)
“…It has been found that the left STG receives input from the orthographic region (e.g. the left fusiform gyrus) and passes information on to more anterior language regions, such as the inferior frontal gyrus in English monolingual speakers (Bitan et al., 2005; Chu et al., 2013; Simos, Rezaie, Fletcher, & Papanicolaou, 2013), and the strength of this connectivity between the left STG and the left fusiform gyrus is greater in older children than in younger children (Bitan, Cheon, Lu, Burman, & Booth, 2009; Booth, Mehdiratta, Burman, & Bitan, 2008). Our study suggests that the left STG also receives inputs from the right visuo-orthographic area.…”
Section: Discussion (supporting)
confidence: 49%
“…In the present study, we functionally defined the face-sensitive FFA, OFA, and the face-movement-sensitive portion of the pSTS/STG (i.e., TVSA) using an independent functional localizer and functional probabilistic maps for face-sensitive regions. Chu et al. (2013) reported functional connectivity between regions labeled as the lateral posterior fusiform gyrus and the posterior STG by comparing visual-speech perception to a baseline condition. Similarly, the posterior STS/STG is responsive to visual (e.g., Beauchamp, Lee, Haxby, & Martin, 2002; Grossman et al., 2000), auditory (e.g., Fecteau, Armony, Joanette, & Belin, 2004; von Kriegstein & Giraud, 2004), and audio-visual stimuli (e.g., Beauchamp, Argall, Bodurka, Duyn, & Martin, 2004; Wright, Pelphrey, Allison, McKeown, & McCarthy, 2003).…”
Section: Discussion (mentioning)
confidence: 99%
“…These studies demonstrated functional connectivity between dorsal-movement regions and other visual-speech-related regions (Borowiak et al., 2018; Chu et al., 2013). However, neither study aimed to specifically investigate network connectivity between the dorsal-movement and the ventral-form regions.…”
Section: Discussion (mentioning)
confidence: 99%
“…For example, under adverse listening conditions, increasing functional connectivity between frontal and parietal regions has been shown to facilitate speech comprehension (Obleser et al., 2007). However, these connectivity studies each examined only a single speech condition, such as auditory-only speech perception (Schall and von Kriegstein, 2014), silent speech reading (Chu et al., 2013), or acoustically degraded speech (Obleser et al., 2007). These task choices left it unclear how modality-specific brain regions and integration regions interact.…”
mentioning
confidence: 99%