The existence of the Language Familiarity Effect (LFE), where talkers of a familiar language are easier to identify than talkers of an unfamiliar language, is well‐documented and uncontroversial. However, a closely related phenomenon known as the Other Accent Effect (OAE), where accented talkers are more difficult to recognize, is less well understood. There are several possible explanations for why the OAE exists, but to date, few data exist to adjudicate between them. Here, we begin to address this issue by directly comparing listeners’ recognition of talkers who speak in different types of accents, and by examining both the LFE and OAE in the same set of listeners. Specifically, Canadian English listeners were tested on their ability to recognize talkers within four types of voice line‐ups: Canadian English talkers, Australian English talkers, Mandarin‐accented English talkers, and Mandarin talkers. We predicted that the OAE would be present for talkers of Mandarin‐accented English but not for talkers of Australian English—which is precisely what we observed. We also observed a disconnect between listeners’ confidence and performance across accent types: listeners performed equally poorly with Mandarin and Mandarin‐accented talkers, but were more confident in their performance with the latter group. The present findings set the stage for further investigation into the nature of the OAE by exploring a range of potential explanations for the effect, and they carry important implications for forensic scientists’ evaluation of earwitness testimony.
Child speech deviates from adult speech in predictable ways. Are listeners who routinely interact with children implicitly aware of these systematic deviations, and thereby better at understanding children? Or do idiosyncratic differences in how children pronounce words overwhelm these systematic deviations? In Experiment 1, we use a speech-in-noise transcription task to test who "speaks kid" among four listener groups: undergraduates (n = 48), mothers of young children (n = 48), early childhood educators (n = 48), and speech-language pathologists (SLPs; n = 48). All listeners transcribed speech by typically developing children and adults. In Experiment 2, we use a similar task to test an additional group of mothers (n = 50) on how intelligible they found their own child versus another child. Contrary to previous claims, we find no evidence for an experience-based general child speech intelligibility advantage. However, we do find that mothers understand their own child best. We also observe a general task advantage by SLPs. Our findings demonstrate that routine (and even extensive) exposure to children may not make all children more intelligible, but that it may instead make particular children one has experience with more intelligible.

Public Significance Statement: The goal of the study is to determine who (if anyone) "speaks kid." In support of existing evidence, we find that experience with a child makes it easier to understand words produced by that child specifically; however, this frequent experience does not predict a general child speech processing advantage. We did, however, find that speech-language pathologists had a general advantage in understanding words spoken by both adults and children. Our findings clarify existing claims in the literature and provide key insights into the nature of human speech processing in general.
Nonnative accents are commonplace, but why? Ample research shows that perceptual representations of second-language speakers are shaped by their first language. But is production also affected? If perceptual representations perfectly control motor production, then second-language speakers should understand their own speech accurately. To test this, we recorded 48 native Mandarin speakers labeling pictures in English. We then played back their own recorded productions (e.g., “lock”) as they chose one of four pictures (lock, log, shape, and ship). They also heard a paired native English speaker. Words contained contrasts challenging for Mandarin speakers, principally coda-voicing (lock, log) and similar-vowel (shape, ship) pairs. Listeners achieved 89% accuracy on both their own productions and the native speakers’ productions, suggesting good matching between perception and production. However, errors were unevenly distributed: Mandarin speakers heard their own voiced codas (log) as voiceless (lock) more often than the reverse (10% vs. 5%, p = 0.002). This mirrors a similar but larger voiceless bias in native-English listeners hearing accented stimuli, suggesting that Mandarin speakers’ coda-voicing perception is more nativelike than their production. Ongoing work attempts to differentiate interlanguage intelligibility effects from learning of idiosyncratic speech patterns, and we are exploring which acoustic features predict recognition.
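The directional comparison above (voiced-to-voiceless errors at 10% vs. the reverse at 5%) is the kind of asymmetry one can check with an exact binomial (sign) test: among all coda-voicing misperceptions, is one error direction significantly more common than chance (50%)? As a minimal sketch, with hypothetical error counts that are not taken from the study, assuming a simple two-outcome error model:

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """Exact upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts (for illustration only): of 45 coda-voicing errors,
# 30 were voiced heard as voiceless, 15 were the reverse direction.
voiced_as_voiceless = 30
voiceless_as_voiced = 15
n_errors = voiced_as_voiceless + voiceless_as_voiced

# One-sided test of whether the voiceless direction dominates.
p_value = binom_sf(voiced_as_voiceless, n_errors, p=0.5)
print(f"one-sided exact binomial p = {p_value:.4f}")
```

With these illustrative counts the one-sided p-value falls below 0.05, consistent with a reliable voiceless bias; a real analysis would also need to account for repeated observations per listener (e.g., with a mixed-effects model) rather than pooling errors.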
In a dual-task paradigm, native English-speaking listeners (N = 55) made judgments about visually presented digits while simultaneously listening to speech that varied by talkers-per-accent (between-subjects: 1 vs. 3) and accent type (within-subjects: native [Canadian], regional [Australian], or nonnative [Mandarin]). Adaptation occurred within as few as six exposures but was observed only for native- and nonnative-accented speech (p < 0.05), and it also depended on talkers-per-accent, indicating that transient speech processing demands are uniquely impacted by the interaction between accent type and talker variability.