The processing of spoken language has been attributed to areas in the superior temporal lobe, where speech stimuli elicit the greatest activation. However, neurobiological and psycholinguistic models have long postulated that knowledge about the articulatory features of individual phonemes has an important role in their perception and in speech comprehension. To probe the possible involvement of specific motor circuits in the speech-perception process, we used event-related functional MRI and presented experimental subjects with spoken syllables, including [p] and [t] sounds, which are produced by movements of the lips or tongue, respectively. Physically similar nonlinguistic signal-correlated noise patterns were used as control stimuli. In localizer experiments, subjects had to silently articulate the same syllables and, in a second task, move their lips or tongue. Speech perception most strongly activated superior temporal cortex. Crucially, however, distinct motor regions in the precentral gyrus sparked by articulatory movements of the lips and tongue were also differentially activated in a somatotopic manner when subjects listened to the lip- or tongue-related phonemes. This sound-related somatotopic activation in precentral gyrus shows that, during speech perception, specific motor circuits are recruited that reflect phonetic distinctive features of the speech sounds encountered, thus providing direct neuroimaging support for specific links between the phonological mechanisms for speech perception and production.

cell assembly | functional MRI | perception-action cycle | mirror neurons | phonetic distinctive feature

Neurological theories of language have a long-standing tradition of distinguishing specialized modular centers for speech perception and speech production in left superior temporal and inferior frontal lobes, respectively (1-3).
Such separate speech-production and -perception modules are consistent with a number of neuroimaging studies, especially the observations that frontal circuits become most strongly active during speech production and that speech input primarily activates the left superior temporal gyrus and sulcus (4-6). Superior temporal speech-perception mechanisms in humans may be situated in areas homologous to the auditory belt and parabelt areas in monkeys (5, 7, 8). In macaques, this region includes neurons specialized for species-specific calls (9, 10). Therefore, it appeared reasonable to postulate a speech-perception module confined to temporal cortex, specifically processing acoustic information that is immanent to speech. In contrast to this view, neurobiological models have long claimed that speech perception is connected to production mechanisms (11-16). Similar views have been proposed in psycholinguistics. For example, the direct realist theory of speech perception (17, 18) postulates a link between motor and perceptual representations of speech. According to the motor theory of Liberman et al. (19, 20), speech perception requires access to phoneme representations that are c...
The brain basis of action words may be neuron ensembles binding language- and action-related information that are dispersed over both language- and action-related cortical areas. This predicts fast spreading of neuronal activity from language areas to specific sensorimotor areas when action words semantically related to different parts of the body are being perceived. To test this, fast neurophysiological imaging was applied to reveal spatiotemporal activity patterns elicited by words with different action-related meaning. Spoken words referring to actions involving the face or leg were presented while subjects engaged in a distraction task and their brain activity was recorded using high-density magnetoencephalography. Shortly after the words could be recognized as unique lexical items, objective source localization using minimum norm current estimates revealed activation in superior temporal (130 msec) and inferior frontocentral areas (142-146 msec). Face-word stimuli activated inferior frontocentral areas more strongly than leg words, whereas the reverse was found at superior central sites (170 msec), thus reflecting the cortical somatotopy of motor actions signified by the words. Significant correlations were found between local source strengths in the frontocentral cortex calculated for all participants and their semantic ratings of the stimulus words, thus further establishing a close relationship between word meaning access and neurophysiology. These results show that meaning access in action word recognition is an early automatic process reflected by spatiotemporal signatures of word-evoked activity. Word-related distributed neuronal assemblies with specific cortical topographies can explain the observed spatiotemporal dynamics reflecting word meaning access.
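The minimum norm current estimates mentioned above can be sketched as an L2-regularized linear inverse of the MEG forward model. The following is a minimal illustration, not the authors' pipeline: the lead field, source grid, and noise level are synthetic stand-ins, and real analyses use anatomically constrained lead fields and noise-covariance whitening.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors, n_sources = 64, 500  # e.g. MEG sensor array, cortical source grid
L = rng.standard_normal((n_sensors, n_sources))  # synthetic lead field

# One active source generates the sensor data (plus a little sensor noise).
j_true = np.zeros(n_sources)
j_true[42] = 1.0
y = L @ j_true + 0.01 * rng.standard_normal(n_sensors)

# L2 minimum-norm estimate: j_hat = L^T (L L^T + lam I)^{-1} y,
# the smallest-norm current distribution consistent with the data.
lam = 1e-2 * np.trace(L @ L.T) / n_sensors  # regularization strength
j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

# The estimate is spatially smeared, but its peak falls at the true source.
print(int(np.argmax(np.abs(j_hat))))
```

With far more sources than sensors the inverse problem is underdetermined; the minimum-norm criterion resolves the ambiguity by picking the least total current, which is why the resulting maps are smooth rather than focal.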
How long does it take the human mind to grasp the idea when hearing or reading a sentence? Neurophysiological methods looking directly at the time course of brain activity indexes of comprehension are critical for finding the answer to this question. As the dominant cognitive approaches, models of serial/cascaded and parallel processing, make conflicting predictions on the time course of psycholinguistic information access, they can be tested using neurophysiological brain activation recorded in MEG and EEG experiments. Seriality and cascading of lexical, semantic and syntactic processes receive support from late (latency ∼1/2 s) sequential neurophysiological responses, especially N400 and P600. However, parallelism is substantiated by early near-simultaneous brain indexes of a range of psycholinguistic processes, up to the level of semantic access and context integration, emerging already 100–250 ms after critical stimulus information is present. Crucially, however, there are reliable latency differences of 20–50 ms between early cortical area activations reflecting lexical, semantic and syntactic processes, which are left unexplained by current serial and parallel brain models of language. We here offer a mechanistic model grounded in cortical nerve cell circuits that builds upon neuroanatomical and neurophysiological knowledge and explains both near-simultaneous activations and fine-grained delays. A key concept is that of discrete distributed cortical circuits with specific inter-area topographies. The full activation, or ignition, of specifically distributed binding circuits explains the near-simultaneity of early neurophysiological indexes of lexical, syntactic and semantic processing. Activity spreading within circuits determined by between-area conduction delays accounts for comprehension-related regional activation differences in the millisecond range.
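The delay mechanism proposed above can be illustrated with a toy calculation in which each area's activation latency is the onset of the first cortical response plus the cumulative between-area conduction delays along the circuit. The area labels and delay values below are illustrative assumptions chosen to fall in the 20–50 ms range discussed, not measurements:

```python
# Toy spreading-activation model: areas of a distributed binding circuit
# ignite in sequence, separated by assumed between-area conduction delays.
onset_ms = 100  # assumed first cortical activation after critical information
circuit = [
    ("superior temporal (lexical)", 0),    # circuit ignition starts here
    ("inferior frontal (syntactic)", 20),  # assumed conduction delay, ms
    ("frontocentral (semantic)", 25),      # assumed conduction delay, ms
]

latencies = {}
latency_ms = onset_ms
for area, delay in circuit:
    latency_ms += delay
    latencies[area] = latency_ms
    print(f"{area}: {latency_ms} ms")
```

All three activations land inside the early 100–250 ms window (near-simultaneity from the perspective of late components like N400/P600), while still differing by the fine-grained 20–50 ms steps that conduction delays impose.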
There is increasing evidence that human perception is realized by a hierarchy of neural processes in which predictions sent backward from higher levels result in prediction errors that are fed forward from lower levels, to update the current model of the environment. Moreover, the precision of prediction errors is thought to be modulated by attention. Much of this evidence comes from paradigms in which a stimulus differs from that predicted by the recent history of other stimuli (generating a so-called "mismatch response"). There is less evidence from situations where a prediction is not fulfilled by any sensory input (an "omission" response). This situation arguably provides a more direct measure of "top-down" predictions in the absence of confounding "bottom-up" input. We applied Dynamic Causal Modeling of evoked electromagnetic responses recorded by EEG and MEG to an auditory paradigm in which we factorially crossed the presence versus absence of "bottom-up" stimuli with the presence versus absence of "top-down" attention. Model comparison revealed that both mismatch and omission responses were mediated by increased forward and backward connections, differing primarily in the driving input. In both responses, modeling results suggested that the presence of attention selectively modulated backward "prediction" connections. Our results provide new model-driven evidence of the pure top-down prediction signal posited in theories of hierarchical perception, and highlight the role of attentional precision in strengthening this prediction.
Rapid information processing in the human brain is vital to survival in a highly dynamic environment. The key tool humans use to exchange information is spoken language, but the exact speed of the neuronal mechanisms underpinning speech comprehension is still unknown. Here we investigate the time course of neuro-lexical processing by analysing neuromagnetic brain activity elicited in response to psycholinguistically and acoustically matched groups of words and pseudowords. We show an ultra-early dissociation in cortical activation elicited by these stimulus types, emerging ~50 ms after acoustic information required for word identification first becomes available. This dissociation is the earliest brain signature of lexical processing of words so far reported, and may help explain the evolutionary advantage of human spoken language.
Mismatch negativity (MMN), an index of experience-dependent memory traces, was used to investigate the processing of action-related words in the human brain. Responses to auditorily presented movement-related English words were recorded in a non-attend odd-ball protocol using a high-density electroencephalographic (EEG) set-up. MMN was calculated using responses to the same words presented as standard and deviant stimuli in different sessions to avoid contamination from phonetic-acoustic differences. The topography of the mismatch negativity to action words revealed an unusual centro-posterior distribution of the responses, suggesting that activity was at least in part generated posterior to usually observed frontal MMNs. Moreover, responses to the hand-related word stimulus (pick) had a more widespread lateral distribution, whereas the leg-related stimulus (kick) elicited a more focal dorsal negativity. These differences, remarkably reminiscent of sensorimotor cortex topography, were further assessed using distributed source analysis of the EEG signal (L2 minimum-norm current estimates). The source analysis also confirmed differentially distributed activation for the two stimuli. We suggest that these results indicate activation of distributed neuronal assemblies that function as category-specific memory traces for words and may involve sensorimotor cortical structures for encoding action words.
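The identity-MMN logic described above — comparing responses to the same word presented as deviant versus as standard — amounts to a difference wave between averaged evoked responses. A minimal single-channel sketch with synthetic data follows; the amplitudes, peak latency, noise level, and trial counts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
sfreq = 500                       # Hz, synthetic sampling rate
t = np.arange(0, 0.4, 1 / sfreq)  # 0-400 ms epoch

def average_erp(n_trials, amp):
    """Averaged evoked response: a negativity peaking ~150 ms plus noise."""
    erp = -amp * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))
    trials = erp + 0.5 * rng.standard_normal((n_trials, t.size))
    return trials.mean(axis=0)

# Identity-MMN design: the SAME word serves as deviant in one session and
# as standard in another, so acoustic-phonetic differences cancel in the
# subtraction and only the context effect (deviance) remains.
response_as_deviant = average_erp(n_trials=100, amp=2.0)  # enhanced response
response_as_standard = average_erp(n_trials=600, amp=1.0)
mmn = response_as_deviant - response_as_standard          # difference wave

peak_ms = 1000 * t[np.argmin(mmn)]
print(f"MMN peak at ~{peak_ms:.0f} ms")
```

Averaging across many trials suppresses the trial-by-trial noise (by roughly the square root of the trial count), which is why far more standard than deviant trials are typically collected in oddball designs.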