Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) elicited by nouns and their preceding articles, as a function of the probability that people use that word to continue the sentence fragment (‘cloze probability’). In our direct replication study spanning 9 laboratories (N = 334), pre-registered replication analyses and exploratory Bayes factor analyses successfully replicated the noun results but, crucially, not the article results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not for the articles. Exploratory Bayesian single-trial analyses showed that the article effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words.
Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale (n = 334) replication study, by investigating the effects of word predictability and sentence plausibility on the N400, the brain's electrophysiological index of semantic processing. A spatio-temporally fine-grained mixed-effects multiple regression analysis revealed overlapping effects of predictability and plausibility on the N400, albeit with distinct spatio-temporal profiles. Our results challenge the view that the predictability-dependent N400 reflects the effects of either prediction or integration alone, and suggest that semantic facilitation of predictable words arises from a cascade of processes that activate word meaning and integrate it with context into a sentence-level meaning. This article is part of the theme issue ‘Towards mechanistic models of meaning composition’.
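The mixed-effects regression approach described above can be sketched in miniature (simulated data; the predictor names, effect sizes, and model structure here are assumptions for illustration, not the authors' actual specification): single-trial N400 amplitude is regressed on both predictability (cloze) and plausibility, with a random intercept per subject to account for between-subject variability.

```python
# Hypothetical sketch of a mixed-effects regression of single-trial N400
# amplitude on word predictability and sentence plausibility.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 20, 40
n = n_subj * n_trials
subject = np.repeat(np.arange(n_subj), n_trials)
cloze = rng.uniform(0, 1, n)      # word predictability (0..1)
plaus = rng.uniform(1, 7, n)      # plausibility rating (1..7)

# Simulate amplitude: less negative (smaller N400) for predictable,
# plausible words, plus trial-level noise
n400 = -2.0 + 3.0 * cloze + 0.5 * plaus + rng.normal(0, 1, n)

df = pd.DataFrame({"subject": subject, "cloze": cloze,
                   "plaus": plaus, "n400": n400})
model = smf.mixedlm("n400 ~ cloze + plaus", df, groups=df["subject"]).fit()
print(model.params["cloze"], model.params["plaus"])
```

Fitting such a model separately at each electrode and time sample is one way to obtain the "spatio-temporally fine-grained" profiles of the two effects that the abstract describes.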
Words activate cortical regions in accordance with their modality of presentation (i.e., written vs. spoken), yet there is a long-standing debate about whether patterns of activity in any specific brain region capture modality-invariant conceptual information. Deficits in patients with semantic dementia highlight the anterior temporal lobe (ATL) as an amodal store of semantic knowledge, but these studies do not permit precise localisation of this function. The current investigation used multiple imaging methods in healthy participants to examine functional dissociations within the ATL. Multi-voxel pattern analysis identified spatially segregated regions: a response to input modality in anterior superior temporal gyrus (aSTG) and a response to meaning in more ventral anterior temporal lobe (vATL). This functional dissociation was supported by resting-state connectivity, which found greater coupling of aSTG with primary auditory cortex and of vATL with the default mode network. A meta-analytic decoding of these connectivity patterns implicated aSTG in processes closely tied to auditory processing (such as phonology and language) and vATL in meaning-based tasks (such as comprehension and social cognition). Thus, we provide converging evidence for the segregation of meaning and input modality in the ATL.
Highlights
- Overlap between semantic control and action understanding revealed with fMRI.
- Overlap found in left inferior frontal and posterior middle temporal cortex.
- Peaks for action and difficulty were spatially identical in LIFG.
- Peaks for action and difficulty were distinct in occipital–temporal cortex.
- Difficult trials recruited additional ventral occipital–temporal areas.
In current theories of language comprehension, people routinely and implicitly predict upcoming words by pre-activating their meaning, morpho-syntactic features and even their specific phonological form. To date, the strongest evidence for this latter form of linguistic prediction comes from a landmark 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of article- and noun-elicited electrical brain potentials (N400) by the pre-determined probability that people continue a sentence fragment with that word ('cloze'). In a direct replication study spanning 9 laboratories (N = 334), we failed to replicate the crucial article-elicited N400 modulation by cloze, while we successfully replicated the commonly reported noun-elicited N400 modulation. This pattern of failure and success was observed in a pre-registered replication analysis, a pre-registered single-trial analysis, and in exploratory Bayesian analyses. Our findings do not support a strong prediction view in which people routinely pre-activate the phonological form of upcoming words, and suggest a more limited role for prediction during language comprehension.