2015
DOI: 10.1007/s00221-015-4324-7

Similar frequency of the McGurk effect in large samples of native Mandarin Chinese and American English speakers

Abstract: Humans combine the visual information from mouth movements with auditory information from the voice to recognize speech. A common method for assessing multisensory speech perception is the McGurk effect: when presented with particular pairings of incongruent auditory and visual speech syllables (e.g., the auditory speech sounds for “ba” dubbed onto the visual mouth movements for “ga”) individuals perceive a third syllable, distinct from the auditory and visual components. Chinese and American cultures differ i…

Cited by 40 publications (45 citation statements)
References 34 publications
“…(For examples of papers that begin this way, see Altieri et al, 2011; Anderson et al, 2009; Colin et al, 2005; Grant et al, 1998; Magnotti et al, 2015; Massaro et al, 1993; Nahorna et al, 2012; Norrix et al, 2007; Ronquest et al, 2010; Rosenblum et al, 1997; Ross et al, 2007; Saalasti et al, 2011; Sams et al, 1998; Sekiyama, 1997; Sekiyama et al, 2003; Strand et al, 2014; van Wassenhove et al, 2007.) Both effects have been replicated many times and unquestionably show the influence of visual input on speech perception.…”
Section: Introduction (mentioning)
confidence: 99%
“…Pitch accent languages, such as Japanese, also have some tonal properties (high and low pitch), but to a much smaller extent than Mandarin Chinese. Scholars such as Sekiyama (1997) and Magnotti et al (2015) have explored the McGurk effect in native speakers of Mandarin Chinese (as described above), although in all cases they targeted the McGurk effect at the segmental level of speech (mainly consonant perception). Consonant perception is fairly susceptible to visual information, because place of articulation, which is relatively salient visually (i.e., it can be lip-read), is a major determinant; the present study extends auditory-visual integration to the suprasegmental level, that is, the four Mandarin Chinese tones.…”
Section: Introduction (mentioning)
confidence: 99%
“…Stimuli and expected fusions were determined from Magnotti et al (2015) and Strand et al (2014). All stimuli were created in iMovie (version 10.1) by aligning the consonant bursts of the original AV track with the to-be-spliced auditory tracks, then deleting the original auditory track from the video recording (e.g., to create the McGurk stimulus A_bV_g, we took the A_gV_g stimulus, matched the audio track in time with the audio recording of /bɑ/, then deleted the auditory /gɑ/).…”
Section: Methods (mentioning)
confidence: 99%