Aside from the language-selective left-lateralized frontotemporal network, language comprehension sometimes recruits a domain-general bilateral frontoparietal network implicated in executive functions: the multiple demand (MD) network. However, the nature of the MD network's contributions to language comprehension remains debated. To illuminate the role of this network in language processing in humans, we conducted a large-scale fMRI investigation using data from 30 diverse word and sentence comprehension experiments (481 unique participants [female and male], 678 scanning sessions). In line with prior findings, the MD network was active during many language tasks. Moreover, similar to the language-selective network, which is robustly lateralized to the left hemisphere, these responses were stronger in the left-hemisphere MD regions. However, in contrast with the language-selective network, the MD network responded more strongly (1) to lists of unconnected words than to sentences, and (2) in paradigms with an explicit task compared with passive comprehension paradigms. Indeed, many passive comprehension tasks failed to elicit a response above the fixation baseline in the MD network, in contrast to strong responses in the language-selective network. Together, these results argue against a role for the MD network in core aspects of sentence comprehension, such as inhibiting irrelevant meanings or parses, keeping intermediate representations active in working memory, or predicting upcoming words or structures. These results align with recent evidence of relatively poor tracking of the linguistic signal by the MD regions during naturalistic comprehension, and instead suggest that the MD network's engagement during language processing reflects effort associated with extraneous task demands.
To understand what you are reading now, your mind retrieves the meanings of words and constructions from a linguistic knowledge store (lexico-semantic processing) and identifies the relationships among them to construct a complex meaning (syntactic or combinatorial processing). Do these two sets of processes rely on distinct, specialized mechanisms or, rather, share a common pool of resources? Linguistic theorizing, empirical evidence from language acquisition and processing, and computational modeling have jointly painted a picture whereby lexico-semantic and syntactic processing are deeply interconnected and perhaps not separable. In contrast, many current proposals of the neural architecture of language continue to endorse a view whereby certain brain regions selectively support syntactic/combinatorial processing, although the locus of such a “syntactic hub”, and its nature, vary across proposals. Here, we searched for selectivity for syntactic over lexico-semantic processing using a powerful individual-subjects fMRI approach across three sentence comprehension paradigms that have been used in prior work to argue for such selectivity: responses to lexico-semantic vs. morpho-syntactic violations (Experiment 1); recovery from neural suppression across pairs of sentences differing in only lexical items vs. only syntactic structure (Experiment 2); and same/different meaning judgments on such sentence pairs (Experiment 3). Across experiments, both lexico-semantic and syntactic conditions elicited robust responses throughout the left fronto-temporal language network. Critically, however, no regions were more strongly engaged by syntactic than by lexico-semantic processing, although some regions showed the opposite pattern.
Thus, contra many current proposals of the neural architecture of language, syntactic/combinatorial processing is not separable from lexico-semantic processing at the level of brain regions—or even voxel subsets—within the language network, in line with strong integration between these two processes that has been consistently observed in behavioral and computational language research. The results further suggest that the language network may be generally more strongly concerned with meaning than syntactic form, in line with the primary function of language—to share meanings across minds.
The frontotemporal language network responds robustly and selectively to sentences. But the features of linguistic input that drive this response and the computations that these language areas support remain debated. Two key features of sentences are typically confounded in natural linguistic input: words in sentences (a) are semantically and syntactically combinable into phrase- and clause-level meanings, and (b) occur in an order licensed by the language’s grammar. Inspired by recent psycholinguistic work establishing that language processing is robust to word order violations, we hypothesized that the core linguistic computation is composition, and, thus, can take place even when the word order violates the grammatical constraints of the language. This hypothesis predicts that a linguistic string should elicit a sentence-level response in the language network provided that the words in that string can enter into dependency relationships as in typical sentences. We tested this prediction across two fMRI experiments (total N = 47) by introducing a varying number of local word swaps into naturalistic sentences, leading to progressively less syntactically well-formed strings. Critically, local dependency relationships were preserved because combinable words remained close to each other. As predicted, word order degradation did not decrease the magnitude of the blood oxygen level–dependent response in the language network, except when combinable words were so far apart that composition among nearby words was highly unlikely. This finding demonstrates that composition is robust to word order violations, and that the language regions respond as strongly as they do to naturalistic linguistic input, provided that composition can take place.
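The local word-swap manipulation described above can be illustrated with a toy sketch: each "local swap" exchanges two adjacent words, so repeated swaps progressively degrade syntactic well-formedness while keeping combinable words near each other. This is an illustrative reconstruction under those assumptions, not the authors' actual stimulus-generation code:

```python
import random

def degrade_word_order(sentence, n_swaps, seed=0):
    """Apply `n_swaps` local (adjacent-word) swaps to a sentence.

    Each swap exchanges a randomly chosen pair of neighboring words,
    so words that could combine semantically/syntactically tend to
    stay close together even as grammaticality degrades.
    """
    rng = random.Random(seed)
    words = sentence.split()
    for _ in range(n_swaps):
        i = rng.randrange(len(words) - 1)  # pick a position; swap with its right neighbor
        words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

original = "the chef quickly prepared a delicious meal"
print(degrade_word_order(original, 1))  # one local swap: mildly scrambled
print(degrade_word_order(original, 5))  # more swaps: progressively less well-formed
```

Note that the words themselves are unchanged, so lexical content is held constant across degradation levels; only their order varies.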
Two analytic traditions characterize fMRI language research. One relies on averaging activations across individuals. This approach has limitations: because of inter-individual variability in the locations of language areas, any given voxel/vertex in a common brain space is part of the language network in some individuals but belongs to a distinct network in others. An alternative approach relies on identifying language areas in each individual using a functional ‘localizer’. Because of its greater sensitivity, functional resolution, and interpretability, functional localization is gaining popularity, but it is not always feasible, and cannot be applied retroactively to past studies. To bridge these disjoint approaches, we created a probabilistic functional atlas using fMRI data for an extensively validated language localizer in 806 individuals. This atlas enables estimating the probability that any given location in a common space belongs to the language network, and thus can help interpret group-level activation peaks and lesion locations, or select voxels/electrodes for analysis. More meaningful comparisons of findings across studies should increase robustness and replicability in language research.
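Conceptually, a probabilistic functional atlas of this kind can be sketched as a voxelwise average of binarized individual activation maps registered to a common space. The sketch below is a minimal illustration of that idea using toy data; the function name, threshold, and inputs are hypothetical and do not reflect the actual atlas pipeline:

```python
import numpy as np

def probabilistic_atlas(individual_maps, threshold=0.0):
    """Estimate, for each voxel, the proportion of individuals whose
    localizer contrast exceeds `threshold` there.

    individual_maps: array of shape (n_subjects, x, y, z) holding each
    subject's localizer contrast map, already registered to a common space.
    Returns an (x, y, z) array with values in [0, 1]: the estimated
    probability that a voxel belongs to the language network.
    """
    maps = np.asarray(individual_maps)
    binarized = (maps > threshold).astype(float)  # 1 = voxel "active" in that subject
    return binarized.mean(axis=0)                 # fraction of subjects active per voxel

# Toy example: four "subjects", a 2x2x1 volume of contrast values.
toy = np.array([
    [[[1.0], [0.0]], [[2.0], [0.0]]],
    [[[0.5], [0.0]], [[0.0], [0.0]]],
    [[[3.0], [0.0]], [[1.0], [0.0]]],
    [[[0.0], [0.0]], [[2.0], [0.0]]],
])
atlas = probabilistic_atlas(toy)
# atlas[0, 0, 0] == 0.75: three of the four toy subjects activate that voxel
```

In this toy form, a value of 0.75 at a voxel means three of four individuals showed supra-threshold localizer activation there, which is the kind of per-location probability the atlas makes available for interpreting group peaks or selecting voxels/electrodes.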