Whether the native language of bilingual individuals is active during second-language comprehension is the subject of lively debate. Studies of bilingualism have often used a mix of first- and second-language words, thereby creating an artificial "dual-language" context. Here, using event-related brain potentials, we demonstrate implicit access to the first language when bilinguals read words exclusively in their second language. Chinese-English bilinguals were required to decide whether English words presented in pairs were related in meaning or not; they were unaware that half of the words concealed a character repetition when translated into Chinese. Whereas the hidden factor failed to affect behavioral performance, it significantly modulated brain potentials in the expected direction, establishing that English words were automatically and unconsciously translated into Chinese. Critically, the same modulation was found in Chinese monolinguals reading the same words in Chinese, i.e., when Chinese character repetition was evident. Finally, we replicated this pattern of results in the auditory modality by using a listening comprehension task. These findings demonstrate that native-language activation is an unconscious correlate of second-language comprehension.

Keywords: bilingualism | event-related potentials | language access | semantic priming | unconscious priming

Some studies in cognitive neuroscience have suggested that fluent bilinguals can effectively inhibit their first language when accessing word meaning in their second language based on the word form (1). However, this finding conflicts with functional neuroimaging data showing overlapping cortical representation of the two languages (2, 3). A number of psycholinguistic experiments have also suggested that the two languages mastered by one individual are constantly coactivated and interactive (4-7), whereas others have provided evidence for language independence (8, 9).
It therefore remains an open question whether or not bilingual individuals can effectively suppress all interference from their first language when processing their second language (10).

Previous studies have made extensive use of cross-language priming (6, 9, 11) or overt translation tasks (12, 13) to compare native- and second-language activation in bilinguals. For example, reaction time is reduced in French-English bilinguals when the English word money is presented after the French word coin "corner" relative to when it is presented after feuille "leaf." However, mixing stimuli from two languages creates an artificial context that necessarily biases the output of behavioral tests toward a bilingual or "dual-language" activation pattern (14). For that matter, translation tasks are even more biased because they require conscious access to both languages. In fact, any experiment mixing stimuli from two languages or using interlingual homographs is likely to activate both languages, even if native-language activation is not automatic during everyday second-language comprehension.
The present study establishes an electrophysiological index of lexical access in speech production by exploring the locus of the frequency and cognate effects during overt naming. We conducted 2 event-related potential (ERP) studies with 16 Spanish-Catalan bilinguals performing a picture naming task in Spanish (L1) and 16 Catalan-Spanish bilinguals performing a picture naming task in Spanish (L2). Behavioral results showed a clear frequency effect and an interaction between frequency and cognate status. The ERP elicited during the production of high-frequency words diverged from the low-frequency ERP between 150 and 200 ms post-target presentation and kept diverging until voice onset. The same results were obtained when comparing cognate and noncognate conditions. Positive correlations were observed between naming latencies and mean amplitude of the P2 component following the divergence, for both the lexical frequency and the cognate effects. We conclude that lexical access during picture naming begins approximately 180 ms after picture presentation. Furthermore, these results offer direct electrophysiological evidence for an early influence of frequency and cognate status in speech production. The theoretical implications of these findings for models of speech production are discussed.
It is now established that native language affects one's perception of the world. However, it is unknown whether this effect is merely driven by conscious, language-based evaluation of the environment or whether it reflects fundamental differences in perceptual processing between individuals speaking different languages. Using brain potentials, we demonstrate that the existence in Greek of two color terms, ghalazio and ble, distinguishing light and dark blue leads to greater and faster perceptual discrimination of these colors in native speakers of Greek than in native speakers of English. The visual mismatch negativity, an index of automatic and preattentive change detection, was similar for blue and green deviant stimuli during a color oddball detection task in English participants, but it was significantly larger for blue than green deviant stimuli in native speakers of Greek. These findings establish an implicit effect of language-specific terminology on human color perception.

Keywords: cognition | cultural differences | event-related potentials | linguistic relativity | visual mismatch negativity
Speech production is one of the most fundamental activities of humans. A core cognitive operation involved in this skill is the retrieval of words from long-term memory, that is, from the mental lexicon. In this article, we establish the time course of lexical access by recording the brain electrical activity of participants while they named pictures aloud. By manipulating the ordinal position of pictures belonging to the same semantic categories, the cumulative semantic interference effect, we were able to measure the exact time at which lexical access takes place. We found significant correlations between naming latencies, ordinal position of pictures, and event-related potential mean amplitudes starting 200 ms after picture presentation and lasting for 180 ms. The study reveals that the brain engages extremely fast in the retrieval of words one wishes to utter and offers a clear time frame of how long it takes for the competitive process of activating and selecting words in the course of speech to be resolved.

Keywords: electrophysiology | lexical access | speech production

Word selection is a crucial step in speech production. Considering that the average lexicon contains approximately 50,000 lexical entries and that an average speaker utters approximately three words per second, the process of lexical retrieval needs to proceed at high speed and with great accuracy. Failures of this process result in speech errors or anomia, which limit communication, as acutely demonstrated in production aphasia, for instance. Although our understanding of how speakers retrieve words from the lexicon has considerably increased in recent years (1-4), the neural implementation of this process remains poorly understood.
In particular, insights regarding the time course of word retrieval in speech production are sparse, and most of the chronometric evidence available is derived from event-related potential (ERP) studies relying on button-press responses rather than on actual overt speech production (5-9). This strategy was adopted because EEG is highly susceptible to mouth movements that could mask the cognitive components of interest. However, at least one EEG study and several MEG studies have shown that artifact-free brain responses can be measured up to at least 400 ms after picture onset (10-13), and a few recent ERP studies demonstrated that classical ERP components can be replicated during overt picture naming (14-17).

Although these latter studies reveal the validity of ERPs for studying overt naming, they have not directly investigated the issue of the time course of lexical selection, but rather other aspects of word production (e.g., morphological processing, bilingual language control, etc.). It is the goal of the present study to identify the time course of word selection during overt naming, capitalizing on the fine temporal resolution of ERPs. In this study, we directly measure the time course of word retrieval during overt naming. Such temporal information is invaluable for understanding the brain mechanisms underlying speech production.
Functional neuroimaging methods have reached maturity. It is now possible to start to build the foundations of a physiology of language. The remarkable number of neuroimaging studies performed so far illustrates the potential of this approach, which complements the classical knowledge accumulated on aphasia. Here we attempt to characterize the impact of the functional neuroimaging revolution on our understanding of language. Although today considered as neuroimaging techniques, we refer less to electroencephalography and magnetoencephalography studies than to positron emission tomography and functional magnetic resonance imaging studies, which deal more directly with the question of localization and functional neuroanatomy. This review is structured in three parts. 1) Because of their rapid evolution, we address technical and methodological issues to provide an overview of current procedures and sketch out future perspectives. 2) We review a set of significant results acquired in normal adults (the core of functional imaging studies) to provide an overview of language mechanisms in the “standard” brain. Single-word processing is considered in relation to input modalities (visual and auditory input), output modalities (speech and written output), and the involvement of “central” semantic processes before sentence processing and nonstandard language (illiteracy, multilingualism, and sensory deficits) are addressed. 3) We address the influence of plasticity on physiological functions in relation to its main contexts of appearance, i.e., development and brain lesions, to show how functional imaging can allow fine-grained approaches to adaptation, the fundamental property of the brain. In closing, we consider future developments for language research using functional imaging.
Bilingual individuals have been shown to access their native language while reading in or listening to their other language. However, it is unknown what type of mental representation (e.g., sound or spelling) they retrieve. Here, using event-related brain potentials, we demonstrate unconscious access to the sound form of Chinese words when advanced Chinese-English bilinguals read or listen to English words. Participants were asked to decide whether or not English words presented in pairs were related in meaning; they were unaware of the fact that some of the unrelated word pairs concealed either a sound or a spelling repetition in their Chinese translations. Whereas spelling repetition in Chinese translations had no effect, concealed sound repetition significantly modulated event-related brain potentials. These results suggest that processing second language activates the sound, but not the spelling, of native language translations.