Previous studies have shown that when speakers of European languages are asked to turn nonwords into words by altering either a vowel or a consonant, they tend to treat vowels as more mutable than consonants. These results inspired the universal vowel mutability hypothesis: listeners learn to cope with vowel variability because vowel information constrains lexical selection less tightly, and thus permits more candidate words, than consonant information does. The present study extends the word reconstruction paradigm to Mandarin Chinese, a Sino-Tibetan language with lexically contrastive tone. Native speakers listened to word-like nonwords (e.g., su3) and were asked to change them into words by manipulating a single consonant (e.g., tu3), vowel (e.g., si3), or tone (e.g., su4). Items were also presented in a fourth, free-response condition in which participants could change any part. Reaction times and responses were recorded. Participants responded faster and more accurately in the free-response and tonal-change conditions than in the consonant- and vowel-change conditions. Unlike in previous reconstruction studies on European languages, where vowels were changed faster and more often than consonants, in Mandarin changes to both vowels and consonants were overshadowed by changes to tone, which was the preferred modification to the stimulus nonwords; changes to vowels were the slowest and least accurate. These findings indicate that the universal vowel mutability hypothesis does not extend to a tonal language: Mandarin tonal information is lower-priority than consonant and vowel information, and vowel information constrains Mandarin lexical access most tightly.
This study investigated how adult second language (L2) learners of Mandarin Chinese use knowledge of phonological and lexical statistical regularities when acoustic information is insufficient for word recognition. A gating task was used to test intermediate L2 learners at two time points across a semester of classroom learning. Native Mandarin speakers (tested once) served as a control group. Mixed-effects modeling revealed that, upon hearing truncated speech, L2 learners, like native speakers, identified high-token-frequency syllable-tone combinations more accurately than low-token-frequency combinations. Error analysis of correct-syllable/incorrect-tone responses revealed that native speakers made predominantly probability-based errors. L2 learners primarily made acoustic-based errors but showed a trend toward more probability-based errors at the second test point. These findings are interpreted in light of L2 speech learning models that emphasize a statistical learning mechanism.