There is growing evidence suggesting that synchronization changes in the oscillatory neuronal dynamics in the EEG or MEG reflect the transient coupling and uncoupling of functional networks related to different aspects of language comprehension. In this work, we examine how sentence-level syntactic unification operations are reflected in the oscillatory dynamics of the MEG. Participants read sentences that were either correct, contained a word category violation, or consisted of random word sequences devoid of syntactic structure. A time-frequency analysis of MEG power changes revealed three types of effects. The first type of effect was related to the detection of a (word category) violation in a syntactically structured sentence, and was found in the alpha and gamma frequency bands. A second type of effect was maximally sensitive to the syntactic manipulations: a linear increase in beta power across the sentence was present for correct sentences, was disrupted upon the occurrence of a word category violation, and was absent in syntactically unstructured random word sequences. We therefore relate this effect to syntactic unification operations. Thirdly, we observed a linear increase in theta power across the sentence for all syntactically structured sentences. This effect is tentatively related to the building of a working memory trace of the linguistic input. In conclusion, the data suggest that syntactic unification is reflected by neuronal synchronization in the lower-beta frequency band.
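The core measure in this abstract, a linear increase in band-limited power across the sentence, can be illustrated with a minimal sketch. This is a generic NumPy/SciPy illustration on a synthetic single-channel trace, not the authors' MEG pipeline; the function name, filter order, and band edges are assumptions for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_trend(signal, fs, band=(13.0, 18.0)):
    """Band-limit a trace, take the squared Hilbert envelope as
    instantaneous power, and fit a linear trend across time.
    Illustrative only; parameters are not from the study."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)
    power = np.abs(hilbert(filtered)) ** 2
    t = np.arange(len(power)) / fs
    slope, intercept = np.polyfit(t, power, 1)
    return slope, intercept

# Synthetic example: a lower-beta (15 Hz) oscillation whose amplitude
# grows over a 4 s "sentence", so band power should trend upward.
fs = 256
t = np.arange(0, 4, 1 / fs)
sig = (1 + 0.5 * t) * np.sin(2 * np.pi * 15 * t)
slope, _ = band_power_trend(sig, fs)
assert slope > 0  # positive trend, as for correct sentences
```

In the study's terms, a word category violation would show up as a disruption of this positive slope from the violation point onward.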
A striking puzzle about language use in everyday conversation is that turn-taking latencies are usually very short, whereas planning language production takes much longer. This implies overlap between language comprehension and production processes, but the nature and extent of such overlap has never been studied directly. Combining an interactive quiz paradigm with EEG measurements in an innovative way, we show that production planning processes start as soon as possible, that is, within half a second after the answer to a question can be retrieved (up to several seconds before the end of the question). Localization of ERP data shows early activation even of brain areas related to late stages of production planning (e.g., syllabification). Finally, oscillation results suggest an attention switch from comprehension to production around the same time frame. This perspective from interactive language use throws new light on the performance characteristics that language competence involves.
The relationship between the evoked responses (ERPs/ERFs) and the event-related changes in EEG/MEG power that can be observed during sentence-level language comprehension is as yet unclear. This study addresses a possible relationship between MEG power changes and the N400m component of the event-related field. Whole-head MEG was recorded while subjects listened to spoken sentences with incongruent (IC) or congruent (C) sentence endings. A clear N400m was observed over the left hemisphere, and was larger for the IC sentences than for the C sentences. A time-frequency analysis of power revealed a decrease in alpha and beta power over the left hemisphere in roughly the same time range as the N400m for the IC relative to the C condition. A linear regression analysis revealed a positive linear relationship between N400m and beta power for the IC condition, but not for the C condition. No such linear relation was found between N400m and alpha power for either condition. The sources of the beta decrease were estimated in the left inferior frontal gyrus (LIFG), a region known to be involved in semantic unification operations. One source of the N400m was estimated in the left superior temporal region, which has been related to lexical retrieval. We interpret our data within a framework in which beta oscillations are inversely related to the engagement of task-relevant brain networks. The source reconstructions of the beta power suppression and the N400m effect support the notion of a dynamic communication between the LIFG and the left superior temporal region during language comprehension.
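The linear regression step reported above can be sketched in a few lines. The per-subject values below are simulated stand-ins (illustration only, not the study's data); the variable names and effect sizes are assumptions for the example.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)
n_subjects = 20

# Hypothetical per-subject measures: N400m effect size and the
# magnitude of the accompanying beta power decrease.
n400m = rng.normal(loc=1.0, scale=0.3, size=n_subjects)
beta_decrease = 0.8 * n400m + rng.normal(scale=0.1, size=n_subjects)

# Ordinary least-squares fit, as in the abstract's regression analysis.
result = linregress(n400m, beta_decrease)
```

A positive `result.rvalue` with a small `result.pvalue` would correspond to the positive linear relationship the study reports for the IC condition.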
RTs in conversation, with average gaps of 200 msec and often less, beat standard RTs, despite the complexity of the response and the lag in speech production (600 msec or more). This can only be achieved by anticipation of the timing and content of turns in conversation, about which little is known. Using EEG and an experimental task with conversational stimuli, we show that estimation of turn duration is based on anticipating the way the turn would be completed. We found a neuronal correlate of turn-end anticipation localized in ACC and inferior parietal lobule, namely a beta-frequency desynchronization as early as 1250 msec before the end of the turn. We suggest that anticipation of the other's utterance leads to accurately timed transitions in everyday conversations.
Background: Accurately solving the electroencephalography (EEG) forward problem is crucial for precise EEG source analysis. Previous studies have shown that the use of multicompartment head models in combination with the finite element method (FEM) can yield high accuracies both numerically and with regard to the geometrical approximation of the human head. However, the workload for the generation of multicompartment head models has often been too high and the use of publicly available FEM implementations too complicated for a wider application of FEM in research studies. In this paper, we present a MATLAB-based pipeline that aims to resolve this lack of easy-to-use integrated software solutions. The presented pipeline allows for the easy application of five-compartment head models with the FEM within the FieldTrip toolbox for EEG source analysis. Methods: The FEM from the SimBio toolbox, more specifically the St. Venant approach, was integrated into the FieldTrip toolbox. We give a short sketch of the implementation and its application, and we perform a source localization of somatosensory evoked potentials (SEPs) using this pipeline. We then evaluate the accuracy that can be achieved using the automatically generated five-compartment hexahedral head model [skin, skull, cerebrospinal fluid (CSF), gray matter, white matter] in comparison to a highly accurate tetrahedral head model that was generated on the basis of a semiautomatic segmentation with very careful and time-consuming manual corrections. Results: The source analysis of the SEP data correctly localizes the P20 component and achieves a high goodness of fit. The subsequent comparison to the highly detailed tetrahedral head model shows that the automatically generated five-compartment head model performs about as well as a highly detailed four-compartment head model (skin, skull, CSF, brain).
This is a significant improvement over the three-compartment head model that is frequently used in practice, since the importance of modeling the CSF compartment has been shown in a variety of studies. Conclusion: The presented pipeline facilitates the use of five-compartment head models with the FEM for EEG source analysis. The accuracy with which the EEG forward problem can thereby be solved is increased compared to the commonly used three-compartment head models, and more reliable EEG source reconstruction results can be obtained. Electronic supplementary material: The online version of this article (10.1186/s12938-018-0463-y) contains supplementary material, which is available to authorized users.
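The goodness of fit reported for the SEP source analysis is commonly defined as the proportion of sensor-level variance explained by the modeled field. A minimal sketch of that metric follows; the toy topography values are invented for illustration, and the exact definition used by the FieldTrip/SimBio pipeline may differ in detail.

```python
import numpy as np

def goodness_of_fit(measured, modeled):
    """Proportion of sensor-level variance explained by a source model:
    GOF = 1 - ||measured - modeled||^2 / ||measured||^2."""
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    resid = measured - modeled
    return 1.0 - np.sum(resid ** 2) / np.sum(measured ** 2)

# Toy sensor topography: the forward model captures most of the data,
# so the GOF should be close to (but below) 1.
data = np.array([1.0, 2.0, -1.0, 0.5])
model = np.array([0.9, 2.1, -1.1, 0.4])
gof = goodness_of_fit(data, model)
```

A GOF near 1 indicates that the fitted dipole's forward-computed field accounts for nearly all of the measured topography, which is what "high goodness of fit" means in the results above.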
During conversation, listeners have to perform several tasks simultaneously. They have to comprehend their interlocutor's turn, while also having to prepare their own next turn. Moreover, a careful analysis of the timing of natural conversation reveals that next speakers also time their turns very precisely. This is possible only if listeners can predict accurately when the speaker's turn is going to end. But how are people able to predict when a turn ends? We propose that people know when a turn ends because they know how it ends. We conducted a gating study to examine whether better turn-end predictions coincide with more accurate anticipation of the last words of a turn. We used turns from an earlier button-press experiment in which people had to press a button exactly when a turn ended. We show that the proportion of correct guesses in our experiment is higher when a turn's end was estimated more accurately in time in the button-press experiment. When people were too late in their anticipation in the button-press experiment, they also anticipated more words in our gating study. We conclude that people made predictions in advance about the upcoming content of a turn and used this prediction to estimate the duration of the turn. We suggest an economical model of turn-end anticipation that is based on anticipation of words and syntactic frames in comprehension.
Article history: Accepted 2 January 2015; available online 9 January 2015. Keywords: mu rhythms, beta rhythms, embodied meaning, action language, perceptive language. EEG mu rhythms (8-13 Hz) recorded at fronto-central electrodes are generally considered as markers of motor cortical activity in humans, because they are modulated when participants perform an action, when they observe another's action, or even when they imagine performing an action. In this study, we analyzed the time-frequency (TF) modulation of mu rhythms while participants read action language ("You will cut the strawberry cake"), abstract language ("You will doubt the patient's argument"), and perceptive language ("You will notice the bright day"). The results indicated that mu suppression at fronto-central sites is associated with action language rather than with abstract or perceptive language. Also, the largest difference between conditions occurred quite late in the sentence, while reading the first noun (contrast Action vs. Abstract), or the second noun following the action verb (contrast Action vs. Perceptive). This suggests that motor activation is associated with the integration of words across the sentence beyond the lexical processing of the action verb. Source reconstruction localized mu suppression associated with action sentences in premotor cortex (BA 6). The present study suggests (1) that the understanding of action language activates motor networks in the human brain, and (2) that this activation occurs online based on semantic integration across multiple words in the sentence.