Acquiring a new language requires individuals to simultaneously and gradually learn linguistic attributes on multiple levels. Here, we investigated how this learning process changes the neural encoding of natural speech by assessing the encoding of the linguistic feature hierarchy in second-language listeners. Electroencephalography (EEG) signals were recorded from native Mandarin speakers with varied English proficiency and from native English speakers while they listened to audio-stories in English. We measured the temporal response functions (TRFs) for acoustic, phonemic, phonotactic, and semantic features in individual participants and found a main effect of proficiency on linguistic encoding. This effect of second-language proficiency was particularly prominent on the neural encoding of phonemes, showing stronger encoding of “new” phonemic contrasts (i.e., English contrasts that do not exist in Mandarin) with increasing proficiency. Overall, we found that the nonnative listeners with higher proficiency levels had a linguistic feature representation more similar to that of native listeners, which enabled the accurate decoding of language proficiency. This result advances our understanding of the cortical processing of linguistic information in second-language learners and provides an objective measure of language proficiency.
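The temporal response function analysis mentioned above is, at its core, a regularized linear regression from time-lagged stimulus features (e.g., the acoustic envelope, phoneme onsets) to the EEG signal. As an illustration only (this is not the authors' code, and the function names are hypothetical), a minimal numpy sketch of TRF estimation via ridge regression:

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Build a design matrix of time-lagged copies of the stimulus features.

    stimulus: (n_samples, n_features) array, e.g. envelope or phoneme features.
    Returns an (n_samples, n_features * n_lags) matrix, zero-padded at the start.
    """
    n_samples, n_features = stimulus.shape
    X = np.zeros((n_samples, n_features * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_features:(lag + 1) * n_features] = stimulus[:n_samples - lag]
    return X

def fit_trf(stimulus, eeg, n_lags, alpha=1.0):
    """Estimate TRF weights with ridge regression: w = (X'X + alpha*I)^-1 X'y.

    eeg: (n_samples, n_channels). Returns (n_lags, n_features, n_channels),
    i.e. one impulse-response-like kernel per feature and channel.
    """
    X = lagged_design(stimulus, n_lags)
    XtX = X.T @ X
    w = np.linalg.solve(XtX + alpha * np.eye(XtX.shape[0]), X.T @ eeg)
    return w.reshape(n_lags, stimulus.shape[1], eeg.shape[1])
```

In practice, the regularization strength `alpha` is chosen by cross-validation, and dedicated tools (e.g., the mTRF toolbox or MNE-Python's `ReceptiveField`) handle multiple lags, channels, and cross-validation folds; this sketch only shows the underlying computation.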
Recent research demonstrates that prototypical negative concord (NC) languages allow double negation (DN) (Espinal & Prieto 2011; Prieto et al. 2013; Déprez et al. 2015; Espinal et al. 2016). In NC, two or more syntactic negations yield a single semantic one (e.g., the ‘I ate nothing’ reading of “I didn’t eat nothing”), whereas in DN each negation contributes to the semantics (e.g., ‘It is not the case that I ate nothing’). That NC and DN have been shown to coexist calls into question the hypothesis that grammars are either NC or DN (Zeijlstra 2004), and supports micro-parametric views of these phenomena (Déprez 2011; Blanchette 2017). Our study informs this debate with new experimental data from American English. We explore the role of syntax and speaker intent in shaping the perception and interpretation of English sentences with two negatives. Our results demonstrate that, as in prototypical NC languages (Espinal et al. 2016), English speakers reliably exploit syntactic, pragmatic, and acoustic cues in selecting an NC or a DN interpretation.
When reading, can the next word in the sentence (word n+1) influence how you read the word you are currently looking at (word n)? Serial models of sentence reading state that this generally should not be the case, whereas parallel models predict that it should. Here we focus on perhaps the simplest and strongest Parafoveal-on-Foveal (PoF) manipulation: word n+1 is either the same as word n or a different word. Participants read sentences for comprehension, and when their eyes left word n, the repeated or unrelated word at position n+1 was swapped for a word that provided a syntactically correct continuation of the sentence. We recorded the electroencephalogram and eye movements, and time-locked the analysis of fixation-related potentials (FRPs) to fixation of word n. We found robust PoF repetition effects on gaze durations on word n, and also on the initial landing position on word n. Most importantly, we also observed significant effects in FRPs, reaching significance at 260 ms post-fixation of word n. Repetition of the target word n at position n+1 caused a widely distributed reduced negativity in the FRPs. Given the timing of this effect, we argue that it is driven by orthographic processing of word n+1 while readers were still looking at word n, plus the spatial integration of orthographic information extracted from these two words in parallel.
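The fixation-related potential analysis above rests on a simple preprocessing step: cutting the continuous EEG into epochs time-locked to fixation onsets from the eye tracker, with a pre-fixation baseline. As a rough sketch under stated assumptions (not the authors' pipeline; onsets are assumed to already be expressed as EEG sample indices):

```python
import numpy as np

def fixation_locked_epochs(eeg, fixation_onsets, sfreq, tmin=-0.1, tmax=0.6):
    """Extract EEG epochs time-locked to fixation onsets.

    eeg: (n_samples, n_channels) continuous recording.
    fixation_onsets: iterable of sample indices of fixation onsets.
    tmin/tmax: epoch window in seconds relative to fixation (tmin < 0).
    Returns (n_epochs, n_times, n_channels), baseline-corrected with the
    mean of the pre-fixation interval [tmin, 0).
    """
    start = int(round(tmin * sfreq))  # negative offset, e.g. -10 samples
    stop = int(round(tmax * sfreq))
    epochs = []
    for onset in fixation_onsets:
        if onset + start < 0 or onset + stop > eeg.shape[0]:
            continue  # skip fixations too close to the recording edges
        ep = eeg[onset + start:onset + stop].astype(float)
        ep -= ep[:-start].mean(axis=0)  # subtract pre-fixation baseline
        epochs.append(ep)
    return np.stack(epochs)
```

Real FRP analyses additionally deal with overlapping responses from successive fixations (e.g., via deconvolution), which this sketch ignores.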
Word count (excluding abstract, title page, references and methods): 6094.

Acknowledgements: The authors would like to thank Michael Broderick for his help with the semantic dissimilarity analysis. The authors also thank Adam Soussana and Ghislain de Labbey for their help with a pilot version of this experiment.

… be affected differently by proficiency, with some of them becoming more native-like than others for proficient L2 users. Part of the evidence comes from electro- and magneto-encephalography (EEG and MEG, respectively) research, which showed the effect of proficiency at the levels of phonemes 13, syntax 14,15, and semantics 16. These studies measured the changes in well-known event-related potentials, such as the MMN, N400, and P600.
These approaches, however, use unnatural speech stimuli (e.g., isolated syllables or sentences containing linguistic violations) that do not fully and realistically activate the specialized speech cortex [17][18][19]. In addition, these approaches consider the various levels of speech perception independently and in isolation. Language learning, on the other hand, involves the simultaneous acquisition of novel phonetic contrasts 20,21, new syllabic structures (phonotactics) 22, and new words. A more complete view of the neural basis of language learning therefore requires a joint study of multiple levels of the linguistic hierarchy, advancing our understanding of L2 perception by informing us on the precise effect of proficiency on the cortical processing strategies that underpin sound and language perception 23-25.

Previous efforts to use naturalistic speech stimuli to study language proficiency showed a modulation of EEG phase synchronization during naturalistic speech listening at both subcortical (FFR 26,27) and cortical (gamma EEG synchrony 28,29) levels. Specifically, stronger synchron...
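The phase-synchronization measures mentioned above quantify how consistently the phase of band-limited neural activity tracks the phase of the speech signal. One common formulation is the phase-locking value (PLV), computed from instantaneous phases obtained via the analytic signal. A minimal numpy-only sketch (illustrative, not the cited studies' exact pipeline; `analytic_signal` reimplements what `scipy.signal.hilbert` provides):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency domain (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def phase_locking_value(sig1, sig2):
    """PLV between two band-limited signals: |mean(exp(i*(phi1 - phi2)))|.

    Returns 1.0 for a constant phase relationship and values near 0
    when the instantaneous phase difference is uniformly distributed.
    """
    phi1 = np.angle(analytic_signal(sig1))
    phi2 = np.angle(analytic_signal(sig2))
    return np.abs(np.mean(np.exp(1j * (phi1 - phi2))))
```

In an actual analysis, both the EEG and the speech envelope would first be band-pass filtered into the frequency band of interest (e.g., gamma), since instantaneous phase is only well defined for narrowband signals.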
When a sequence of written words is briefly presented and participants are asked to identify just one word at a post-cued location, then word identification accuracy is higher when the word is presented in a grammatically correct sequence compared with an ungrammatical sequence. This sentence superiority effect has been reported in several behavioral studies and two EEG investigations. Taken together, the results of these studies support the hypothesis that the sentence superiority effect is primarily driven by rapid access to a sentence-level representation via partial word identification processes that operate in parallel over several words. Here we used MEG to examine the neural structures involved in this early stage of written sentence processing, and to further specify the timing of the different processes involved. Source activities over time showed grammatical vs. ungrammatical differences first in the left inferior frontal gyrus (IFG: 325-400 ms), then the left anterior temporal lobe (ATL: 475-525 ms), and finally in both left IFG and left posterior superior temporal gyrus (pSTG: 550-600 ms). We interpret the early IFG activity as reflecting the rapid bottom-up activation of sentence-level representations, including syntax, enabled by partly parallel word processing. Subsequent activity in ATL and pSTG is thought to reflect the constraints imposed by such sentence-level representations on on-going word-based semantic activation (ATL), and the subsequent development of a more detailed sentence-level representation (pSTG). These results provide further support for a cascaded interactive-activation account of sentence reading.