Speech production is one of the most fundamental activities of humans. A core cognitive operation involved in this skill is the retrieval of words from long-term memory, that is, from the mental lexicon. In this article, we establish the time course of lexical access by recording the brain electrical activity of participants while they named pictures aloud. By manipulating the ordinal position of pictures belonging to the same semantic categories (the cumulative semantic interference effect), we were able to measure the exact time at which lexical access takes place. We found significant correlations between naming latencies, ordinal position of pictures, and event-related potential mean amplitudes starting 200 ms after picture presentation and lasting for 180 ms. The study reveals that the brain engages extremely fast in the retrieval of words one wishes to utter and offers a clear time frame of how long it takes for the competitive process of activating and selecting words in the course of speech to be resolved.

Keywords: electrophysiology | lexical access | speech production

Word selection is a crucial step in speech production. Considering that the average lexicon contains approximately 50,000 lexical entries and that an average speaker utters approximately three words per second, the process of lexical retrieval needs to proceed at high speed and with great accuracy. Failures of this process result in speech errors or anomia, which limit communication, as acutely demonstrated in production aphasia, for instance. Although our understanding of how speakers retrieve words from the lexicon has increased considerably in recent years (1-4), the neural implementation of this process remains poorly understood.
In particular, insights regarding the time course of word retrieval in speech production are sparse, and most of the available chronometric evidence is derived from event-related potential (ERP) studies relying on button-press responses rather than actual overt speech production (5-9). This strategy was adopted because EEG is highly susceptible to artifacts from mouth movements, which could mask the cognitive components of interest. However, at least one EEG study and several MEG studies have shown that artifact-free brain responses can be measured up to at least 400 ms after picture onset (10-13), and a few recent ERP studies have demonstrated that classical ERP components can be replicated during overt picture naming (14-17). Although these latter studies establish the validity of ERPs for studying overt naming, they have not directly investigated the time course of lexical selection, but rather other aspects of word production (e.g., morphological processing, bilingual language control). The goal of the present study is to identify the time course of word selection during overt naming, capitalizing on the fine temporal resolution of ERPs. Such temporal information is invaluable for understanding the brain mechanisms underlying speech production.
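The single-trial analysis described in the abstract above (correlating ERP mean amplitudes in a post-stimulus window with the ordinal position of pictures within their semantic category) can be sketched as follows. This is a minimal illustration with simulated data; the sampling rate, array shapes, and the 200-380 ms window (derived from the reported 200 ms onset and 180 ms duration) are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative assumption: one EEG epoch per naming trial, sampled at
# 500 Hz, starting at picture onset; shape (n_trials, n_samples).
rng = np.random.default_rng(0)
n_trials, sfreq = 100, 500
epochs = rng.normal(size=(n_trials, sfreq))           # 1 s of data per trial
ordinal_position = rng.integers(1, 6, size=n_trials)  # position within category

# Mean amplitude in the 200-380 ms post-onset window.
start, stop = int(0.200 * sfreq), int(0.380 * sfreq)
mean_amp = epochs[:, start:stop].mean(axis=1)

# Correlate single-trial mean amplitude with ordinal position,
# analogous to correlating it with naming latency.
r, p = pearsonr(mean_amp, ordinal_position)
```

With real data, a significant correlation in this window would indicate that the ERP signal tracks the cumulative semantic interference manipulation at that latency.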
Establishing when and how the human brain differentiates between object categories is key to understanding visual cognition. Event-related potential (ERP) investigations have led to the consensus that faces selectively elicit a negative wave peaking 170 ms after presentation, the 'N170'. In such experiments, however, faces are nearly always presented from a full front view, whereas other stimuli are more perceptually variable, leading to uncontrolled interstimulus perceptual variance (ISPV). Here, we compared ERPs elicited by faces, cars and butterflies while, for the first time, controlling ISPV (low or high). Surprisingly, the N170 was sensitive not to object category, but to ISPV. In addition, we found category effects independent of ISPV 70 ms earlier than has been generally reported. These results demonstrate early ERP category effects in the visual domain, call into question the face selectivity of the N170 and establish ISPV as a critical factor to control in experiments relying on multitrial averaging.
Why is it more difficult to comprehend a second language (L2) than a first language (L1)? In the present article, we investigate whether difficulties during L2 sentence comprehension stem from differences in the way L1 and L2 speakers anticipate upcoming words. We recorded the brain activity (event-related potentials) of Spanish monolinguals, French-Spanish late bilinguals, and Spanish-Catalan early bilinguals while they read sentences in Spanish. We manipulated the ending of highly constrained sentences so that the critical noun was either expected or not. The expected and unexpected nouns were of different gender, allowing us to observe potential anticipation effects already on the article. In line with previous studies, a modulation of the N400 effect was observed on the article and the noun, followed by an anterior positivity on the noun. Importantly, this pattern was found in all three groups, suggesting that, at least when their two languages are closely related, bilinguals are able to anticipate upcoming words in a similar manner as monolinguals.
This study investigates the mechanisms responsible for fast changes in processing foreign-accented speech. Event-related brain potentials (ERPs) were obtained while native speakers of Spanish listened to native and foreign-accented speakers of Spanish. We observed a less positive P200 component for foreign-accented speech relative to native speech comprehension, suggesting that the extraction of spectral information and other important acoustic features was hampered during foreign-accented speech comprehension. However, the amplitude of the N400 component for foreign-accented speech comprehension decreased across the experiment, suggesting the use of a higher-level, lexical mechanism. Furthermore, during native speech comprehension, semantic violations in the critical words elicited an N400 effect followed by a late positivity. During foreign-accented speech comprehension, semantic violations elicited only an N400 effect. Overall, our results suggest that, despite a lack of improvement in phonetic discrimination, native listeners experience changes at lexical-semantic levels of processing after brief exposure to foreign-accented speech. Moreover, these results suggest that lexical access, semantic integration and linguistic re-analysis processes are permeable to external factors, such as the accent of the speaker.
A crucial step for understanding how lexical knowledge is represented is to describe the relative similarity of lexical items and how it influences language processing. Previous studies of the effects of form similarity on word production have reported conflicting results, notably within and across languages. The aim of the present study was to clarify this empirical issue and provide specific constraints for theoretical models of language production. We investigated the role of phonological neighborhood density in a large-scale picture naming experiment using fine-grained statistical models. The results showed that increasing phonological neighborhood density has a detrimental effect on naming latencies, and re-analyses of independently obtained data sets provided supplementary evidence for this effect. Finally, we reviewed a large body of evidence concerning phonological neighborhood density effects in word production and discussed the occurrence of facilitatory and inhibitory effects in accuracy measures. The overall pattern shows that phonological neighborhood generates two opposite forces, one facilitatory and one inhibitory. In cases where speech production is disrupted (e.g., certain aphasic symptoms), the facilitatory component may emerge, but inhibitory processes dominate in efficient naming by healthy speakers. These findings are difficult to accommodate in terms of monitoring processes, but can be explained within interactive activation accounts combining phonological facilitation and lexical competition.

Reconciling phonological neighborhood effects in speech production through single trial analysis

Native speakers of a language know a myriad of different words. This so-called mental lexicon is often described as an interconnected network in which representational distance may depend on meaning or form similarities among the words.
One crucial step toward understanding this network is to describe which kinds of similarity influence language processing and how they modulate performance. In this context, the current research is concerned with the role of phonological similarity in word retrieval and speech production. This issue has been addressed in earlier theoretical work (e.g., Chen & Mirman, 2012; Dell & Gordon, 2003), but the empirical evidence on which this work is grounded remains controversial. Our goal in this article is to clarify the empirical facts regarding form-similarity effects in speech production and to integrate them within a single account. Clarifying the empirical constraints will allow further refinement of theoretical models to advance our understanding of the cognitive processes underlying speech production.

An approximation of how similar or interconnected a word is within the lexical network can be obtained by computing its phonological neighborhood density (PhND). PhND refers to the number of words that can be formed from a given word by substituting, adding, or deleting one phoneme (Luce, 1986). For example, the word "bat" sounds similar to many other words (e.g., "cat", "fat",...
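The neighbor definition given above (one substitution, addition, or deletion of a phoneme) maps directly onto a Levenshtein distance of 1, and can be sketched as follows. The toy lexicon is a hypothetical illustration using orthographic strings in place of phoneme sequences; real PhND counts are computed over phonological transcriptions of a full lexical database.

```python
def is_neighbor(w1, w2):
    """True if w2 can be formed from w1 by substituting, adding,
    or deleting exactly one segment (Levenshtein distance of 1)."""
    if w1 == w2 or abs(len(w1) - len(w2)) > 1:
        return False
    if len(w1) == len(w2):
        # Substitution: exactly one mismatching position.
        return sum(a != b for a, b in zip(w1, w2)) == 1
    # Addition/deletion: deleting one segment of the longer word
    # must yield the shorter word.
    short, long_ = sorted((w1, w2), key=len)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def phnd(word, lexicon):
    """Phonological neighborhood density: number of neighbors in the lexicon."""
    return sum(is_neighbor(word, other) for other in lexicon)

# Hypothetical toy lexicon of segment strings.
lexicon = ["bat", "cat", "fat", "bit", "bats", "at", "dog"]
print(phnd("bat", lexicon))  # cat, fat, bit, bats, at -> 5
```

Under this definition, neighbors arise from all three edit types: "cat" (substitution), "bats" (addition), and "at" (deletion) all count toward the PhND of "bat".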