In high-stakes oral proficiency testing as well as in everyday encounters, accent is the most salient aspect of nonnative speech. Prior studies of English language learners' (ELLs') pronunciation have focused on single parameters of English, such as vowel duration, fundamental frequency as related to intonation, or temporal measures of speech production. The present study addresses a constellation of suprasegmental characteristics of nonnative speakers of accented English, combining indices of speech rate, pause, and intonation. It examines relations between those acoustic measures of accentedness and listeners' impressions of second-language oral proficiency. Twenty-six speech samples elicited from TOEFL iBT® examinees were analyzed using a KayPENTAX Computerized Speech Laboratory. Monolingual U.S. undergraduates (n = 188) judged the speakers' oral proficiency and comprehensibility. A multiple regression analysis revealed the individual and joint predictiveness of the suprasegmental measures. This study is innovative in that the multiple features of accentedness were measured via instrumentation rather than being rated by judges who may themselves be subject to rating biases. The suprasegmental measures collectively accounted for 50% of the variance in oral proficiency and comprehensibility ratings, even without taking into consideration other aspects of oral performance or rater predilections.

THE CONSTRUCTS OF COMPREHENSIBILITY and accentedness relate in complex ways to native speaker (NS) judgments of English language learners' (ELLs') oral proficiency. No clear isomorphism has been established between degree of accentedness and comprehensibility. Speakers who succeed in reducing the degree of "foreignness" in their accents (based on expert observer judgments) may still be heard as incomprehensible by lay listeners (Munro & Derwing, 1995).
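The abstract reports that a multiple regression on instrumentally measured suprasegmentals explained 50% of the rating variance. The study's actual data and model cannot be reproduced from the abstract; as a minimal sketch of the general technique, the following uses entirely synthetic data and hypothetical predictor names (speech_rate, pause_time, pitch_range are illustrative stand-ins, not the study's variables) to show how a joint R² from ordinary least squares is computed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 26  # number of speech samples, as in the study

# Hypothetical suprasegmental predictors (units are illustrative only):
speech_rate = rng.normal(3.5, 0.6, n)   # syllables per second
pause_time = rng.normal(0.8, 0.2, n)    # mean silent-pause duration, s
pitch_range = rng.normal(80, 20, n)     # F0 range, Hz

# Simulated proficiency ratings, partly driven by the predictors
rating = (1.2 * speech_rate - 2.0 * pause_time
          + 0.01 * pitch_range + rng.normal(0, 0.5, n))

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), speech_rate, pause_time, pitch_range])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
pred = X @ beta

# R^2: proportion of rating variance the predictors jointly explain
ss_res = np.sum((rating - pred) ** 2)
ss_tot = np.sum((rating - rating.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 2))
```

The same R² logic underlies the "50% of the variance" claim: it compares the residual variance left by the fitted model against the total variance of the ratings.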
The linguistic stereotyping hypothesis holds that even brief samples of speech varieties associated with low-prestige groups can cue negative attributions regarding individual speakers. The converse phenomenon is reverse linguistic stereotyping (RLS): attributions of a speaker's group membership trigger distorted evaluations of that person's speech. The present study established a procedure for ascertaining individual listeners' proclivity to RLS. In addition to RLS, variables reflecting degree of multicultural involvement (e.g., proportion of friends who are nonnative speakers, amount of language study) predicted speech evaluations. Although the RLS measurement procedure outlined here is more demanding to administer than paper-and-pencil self-reports, it has the advantage of reflecting authentic RLS processes. Measuring individuals' RLS levels can help screen teachers, job interviewers, immigration officials, and others who are called on to judge the oral proficiency of speakers of nonprestige language varieties.
In recognition of the growing internationalization of language teaching and learning in the field of TESOL, this study examines the effect of incorporating a variety of international English accents into a simulated TOEFL listening comprehension test. Although some high-stakes English proficiency exams have begun incorporating speech samples produced by speakers from a range of inner circle English-speaking backgrounds (e.g., Britain, the United States, Australia), the inclusion of samples produced by speakers of outer and expanding circle English varieties (e.g., India, Nigeria, Mexico, South Korea) has been largely avoided. For this study the researchers recruited speakers from six distinct English varieties to produce speech samples for a mock TOEFL iBT listening exam. Listeners who spoke with the same six international English accents were then recruited to take the resulting tests. Results indicate that when accented English is highly comprehensible, listening test scores for stimuli based on high-proficiency speakers of outer and expanding circle varieties of English are not significantly lower than scores for stimuli based on inner circle varieties. With respect to a shared first language effect on test scores when test materials are spoken in the test taker's own accent, results are complex but inconclusive.
This study compared five research-based intelligibility measures as they were applied to six varieties of English. The objective was to determine which approach to measuring intelligibility would be most reliable for predicting listener comprehension, as measured through a listening comprehension test similar to the Test of English as a Foreign Language. The speakers included 18 English users representing six distinct varieties. Their speech was evaluated by 60 listeners, themselves users of the same English varieties, who completed the listening comprehension test as well as five intelligibility tasks, all recorded by the speakers. The five measures of intelligibility were responses to true/false statements, scalar ratings of speech, perception of nonsense sentences, perception of filtered sentences, and transcription of speech; these measures were compared in terms of their relationship to listening comprehension scores using linear mixed-effects models. Results showed that the measure of intelligibility based on listeners' responses to nonsense sentences was the strongest predictor of the listening comprehension scores.
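The study fitted linear mixed-effects models with random effects the abstract does not fully specify, so they cannot be reconstructed here. As a deliberately simplified illustration of the comparison logic only, the sketch below uses synthetic data and plain per-predictor R² (squared Pearson correlation), ignoring the random-effects structure; the measure names mirror the abstract, but all values are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60  # listeners, as in the study

# Hypothetical intelligibility measures, each scaled 0-1:
measures = {
    "true_false": rng.uniform(0, 1, n),
    "scalar_rating": rng.uniform(0, 1, n),
    "nonsense_sentences": rng.uniform(0, 1, n),
    "filtered_sentences": rng.uniform(0, 1, n),
    "transcription": rng.uniform(0, 1, n),
}

# Simulated comprehension scores driven mainly by the nonsense measure,
# mimicking the pattern the study reports
comprehension = (0.7 * measures["nonsense_sentences"]
                 + rng.normal(0, 0.15, n))

# Per-predictor R^2: squared Pearson correlation with comprehension
r2 = {name: np.corrcoef(x, comprehension)[0, 1] ** 2
      for name, x in measures.items()}
best = max(r2, key=r2.get)
print(best)  # expected to be "nonsense_sentences", by construction
```

A faithful replication would instead fit one mixed-effects model per measure (e.g., with random intercepts for listener and speaker) and compare model fit, which is what the abstract's "linear mixed-effects models" refers to.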
Because judgments of non-native speech are closely tied to social biases, oral proficiency ratings are susceptible to error arising from rater background and social attitudes. In the present study we seek first to estimate the variance attributable to rater background and attitudinal variables in novice raters’ assessments of L2 spoken English. Second, we examine the effects of minimal training in reducing the potency of those trait-irrelevant rater factors. Accordingly, we examined the relative impact of rater differences on TOEFL iBT® speaking scores. Eighty-two untrained raters judged 112 speech samples produced by TOEFL® examinees. Findings revealed that approximately 20% of untrained raters’ score variance was attributable to their background and attitudinal factors. The strongest predictor was the raters’ own native speaker status. However, minimal online training dramatically reduced the impact of rater background and attitudinal variables for a subsample of high- and low-severity raters. These findings suggest that brief, user-friendly rater-training sessions offer promise for mitigating rater bias, at least in the short run. This procedure can be adopted in assessment and other related fields of applied linguistics.
Intelligibility problems between native speakers (NSs) and nonnative speakers (NNSs) of English are often attributed to some perceived inadequacy of the NNSs. This emphasis on the NNSs’ role in successful communication is highly problematic, given that intelligibility is a negotiated process between speaker and listener. In some cases, NSs hold negative attitudes toward NNSs that impair the NSs’ willingness to communicate with NNSs and to acknowledge proficient NNS speech. Thus, NS attitudes are also important factors in the success of NS–NNS communication. This article demonstrates a brief intervention that reduces negative language attitudes and thus promotes communication between NS undergraduates and NNSs who are international teaching assistants (ITAs). Two studies are reported. In both, undergraduates engaged in cooperative problem-solving exercises with ITAs. Results show that undergraduates exposed to structured intergroup contact subsequently rated ITAs higher in instructional competence and comprehensibility. Future applications of contact theory promise to improve NSs’ comprehension of nonnative English and to cultivate their global citizenship.