Purpose: This study investigated how listeners’ native language affects their weighting of acoustic cues (vowel quality, pitch, duration, and intensity) in the perception of contrastive word stress.

Method: Native speakers (N = 45) of three typologically diverse languages (English, Russian, and Mandarin) performed a stress identification task on nonce disyllabic words with fully crossed combinations of each of the four cues in both syllables.

Results: Although vowel quality was the strongest cue for all groups of listeners, pitch was the second strongest cue for the English and Mandarin listeners but was virtually disregarded by the Russian listeners. The Russian listeners relied on duration and intensity cues to a significantly greater extent than the English and Mandarin participants. Compared with when cues were noncontrastive across syllables, cues were stronger in the iambic contour than in the trochaic contour.

Conclusions: Although both English and Russian are stress languages and Mandarin is a tonal language, the stress perception of the Mandarin listeners, but not that of the Russian listeners, was more similar to that of the native English listeners, both in the weighting of the acoustic cues and in the cues’ relative strength in different word positions. The findings suggest that the tuning of second-language prosodic perception is not entirely predictable from prosodic similarities across languages.
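The abstract does not spell out how the fully crossed design is constructed, but a sketch may clarify its size. Assuming, purely for illustration, that each of the four cues independently signals stress on either the first or the second syllable (the cue names and the two-level coding are assumptions, not details from the study), the condition set can be enumerated as a Cartesian product:

```python
from itertools import product

# Hypothetical coding: each cue favors one of the two syllables.
CUES = ["vowel_quality", "pitch", "duration", "intensity"]
LEVELS = ["syllable1", "syllable2"]

# Fully crossed design: every combination of cue placements.
conditions = [dict(zip(CUES, combo)) for combo in product(LEVELS, repeat=len(CUES))]
print(len(conditions))  # 2 ** 4 = 16 stimulus conditions
```

With more levels per cue (e.g., a noncontrastive setting), the count grows as levels ** cues; the actual stimulus inventory in the study may differ.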
The study uses an elicited imitation (EI) task to examine the effect of the native language on the use of the English nongeneric definite article by highly proficient first-language (L1) Spanish and Russian speakers and to test the hierarchy of article difficulty first proposed by Liu and Gleason (2002). Our findings suggest that there is a clear influence of L1 on participants’ reproduction of the second-language (L2) definite article in nongeneric contexts, but that various contexts present different levels of difficulty for the two L1 groups. The participants whose L1 is Spanish – a language with an article system – perform at a native-like level of accuracy in the grammatical condition of the test, whereas the participants whose L1 is Russian – a language without articles – demonstrate a tendency to omit definite articles in the same contexts. In the ungrammatical condition, Spanish speakers differ from the native speaker control group in their suppliance of the definite article in conventional and cultural contexts, while Russian participants supply the definite article significantly less than both the Spanish participants and the control group across all article categories. The study offers novel insights into what constitutes article difficulty for L2 learners from different L1s.
The sensorimotor cortex is somatotopically organized to represent the vocal tract articulators such as lips, tongue, larynx, and jaw. How speech and articulatory features are encoded at the subcortical level, however, remains largely unknown. We analyzed local field potential (LFP) recordings from the subthalamic nucleus (STN) and simultaneous electrocorticography recordings from the sensorimotor cortex of 11 human subjects (1 female) with Parkinson's disease during implantation of deep-brain stimulation (DBS) electrodes while they read aloud three-phoneme words. The initial phonemes involved either articulation primarily with the tongue (coronal consonants) or the lips (labial consonants). We observed significant increases in high-gamma (60–150 Hz) power in both the STN and the sensorimotor cortex that began before speech onset and persisted for the duration of speech articulation. As expected from previous reports, in the sensorimotor cortex, the primary articulators involved in the production of the initial consonants were topographically represented by high-gamma activity. We found that STN high-gamma activity also demonstrated specificity for the primary articulator, although no clear topography was observed. In general, subthalamic high-gamma activity varied along the ventral-dorsal trajectory of the electrodes, with greater high-gamma power recorded in the dorsal locations of the STN. Interestingly, the majority of significant articulator-discriminative activity in the STN occurred before that in sensorimotor cortex. These results demonstrate that articulator-specific speech information is contained within high-gamma activity of the STN, but with different spatial and temporal organization compared with similar information encoded in the sensorimotor cortex.
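The abstract does not describe the exact analysis pipeline, but a common way to extract band-limited high-gamma (60–150 Hz) power from an electrophysiological signal is zero-phase bandpass filtering followed by the Hilbert envelope. The sketch below illustrates that generic approach on synthetic data; the function name, filter order, and parameters are illustrative assumptions, not the authors' method:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(lfp, fs, band=(60.0, 150.0), order=4):
    """Band-limited power via a zero-phase bandpass filter and the Hilbert envelope."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, lfp)        # zero-phase filtering (no time shift)
    envelope = np.abs(hilbert(filtered))  # instantaneous amplitude
    return envelope ** 2                  # power over time

# Synthetic example: a 100 Hz component (inside the band) plus low-level noise.
np.random.seed(0)
fs = 1000  # Hz
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 100 * t) + 0.1 * np.random.randn(t.size)
power = high_gamma_power(sig, fs)
```

In practice the resulting power time series would be averaged within trials and aligned to speech onset; edge samples are typically discarded because of filter transients.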
In order to comprehend speech, listeners have to combine low‐level phonetic information about the incoming auditory signal with higher‐order contextual information to make a lexical selection. This requires stable phonological categories and unambiguous representations of words in the mental lexicon. Unlike native speakers, second language (L2) speakers, who perceive nonnative sounds through the prism of their first language (L1), operate with fuzzy phonological categories, which lead to phonologically ambiguous lexical representations (e.g., the words rock and lock can be confused if phonological representations for /r/ and /l/ are not sufficiently robust). The present study uses the AX discrimination task to establish the degree of sensitivity of L2 listeners to the Russian hard/soft phonological contrast. The same phonological contrasts are then used in the stimuli for the second task, a listening comprehension task with word identification, to mark semantic, syntactic, and morphological distinctions in words. The goal of the study is to examine the contributions and relative efficiency of different contextual constraints (semantic, syntactic, and morphological) to the resolution of phonolexical ambiguity in L2 auditory sentence processing. The results suggest that when L2 phonological contrasts present a discriminability problem and create phonolexical ambiguity, L2 listeners rely on morphological constraints for disambiguation of word forms and syntactic constraints for disambiguation of words belonging to different parts of speech to a greater extent than on semantic constraints for disambiguation of nouns in the same form.