The diversity of cognitive deficits and neuropathological processes associated with dementias has encouraged divergence in pathophysiological explanations of disease. Here, we review an alternative framework that emphasises convergent critical features of cognitive pathophysiology. Rather than the loss of “memory centres” or “language centres”, or singular neurotransmitter systems, cognitive deficits are interpreted in terms of aberrant predictive coding in hierarchical neural networks. This builds on advances in normative accounts of brain function, specifically the Bayesian integration of beliefs and sensory evidence in which hierarchical predictions and prediction errors underlie memory, perception, speech and behaviour. We describe how analogous impairments in predictive coding in parallel neurocognitive systems can generate diverse clinical phenomena, including the characteristics of dementias. The review presents evidence from behavioural and neurophysiological studies of perception, language, memory and decision-making. The re-formulation of cognitive deficits in terms of predictive coding has several advantages. It brings diverse clinical phenomena into a common framework; it aligns cognitive and movement disorders; and it makes specific predictions on cognitive physiology that support translational and experimental medicine studies. The insights into complex human cognitive disorders from the predictive coding framework may therefore also inform future therapeutic strategies.
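The Bayesian integration of beliefs and sensory evidence that this framework rests on can be sketched in a few lines. The following toy example (illustrative only; the numbers and function name are not from the review) shows the core predictive-coding move: a Gaussian prior belief is combined with noisy sensory input, and the precision-weighted prediction error updates the belief.

```python
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Precision-weighted update of a Gaussian belief by one noisy observation."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    # Prediction error: discrepancy between sensory evidence and the prediction
    prediction_error = obs - prior_mean
    # Gain (learning rate) = relative precision of the evidence
    gain = obs_precision / (prior_precision + obs_precision)
    post_mean = prior_mean + gain * prediction_error
    post_var = 1.0 / (prior_precision + obs_precision)
    return post_mean, post_var

# Precise evidence shifts the belief strongly; imprecise evidence barely moves it.
mean, var = bayes_update(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=0.25)
```

On this view, aberrant precision weighting (the `gain` term) in one neurocognitive system or another is one way analogous computational failures could yield diverse clinical phenomena.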
Spoken word recognition in context is remarkably fast and accurate, with recognition times of ~200 ms, typically well before the end of the word. The neurocomputational mechanisms underlying these contextual effects are still poorly understood. This study combines source-localized electroencephalographic and magnetoencephalographic (EMEG) measures of real-time brain activity with multivariate representational similarity analysis to determine directly the timing and computational content of the processes evoked as spoken words are heard in context, and to evaluate the respective roles of bottom-up and predictive processing mechanisms in the integration of sensory and contextual constraints. Male and female human participants heard simple (modifier-noun) English phrases that varied in the degree of semantic constraint that the modifier (W1) exerted on the noun (W2), as in pairs such as "yellow banana." We used gating tasks to generate estimates of the probabilistic predictions generated by these constraints, as well as measures of their interaction with the bottom-up perceptual input for W2. Representational similarity analysis models of these measures were tested against electroencephalographic and magnetoencephalographic brain data across a bilateral fronto-temporo-parietal language network. Consistent with probabilistic predictive processing accounts, we found early activation of semantic constraints in frontal cortex (LBA45) as W1 was heard. The effects of these constraints (at 100 ms after W2 onset in left middle temporal gyrus and at 140 ms in left Heschl's gyrus) were only detectable, however, after the initial phonemes of W2 had been heard. Within an overall predictive processing framework, bottom-up sensory inputs are still required to achieve early and robust spoken word recognition in context.
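The representational similarity analysis logic used here can be illustrated with a toy sketch: a model dissimilarity matrix (built below from hypothetical word-predictability values, standing in for the gating-task measures) is rank-correlated with a data dissimilarity matrix (standing in for multivariate EMEG response patterns). All names and numbers are illustrative, not the study's actual measures or data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical model features: one predictability estimate per stimulus
model_features = np.array([[0.9], [0.8], [0.2], [0.1]])
# Hypothetical data: one multivariate response pattern per stimulus
data_patterns = rng.normal(size=(4, 10))

# Condition-by-condition dissimilarities (condensed upper triangles)
model_rdm = pdist(model_features, metric="euclidean")
data_rdm = pdist(data_patterns, metric="correlation")

# RSA statistic: rank correlation between model and data dissimilarities
rho, p = spearmanr(model_rdm, data_rdm)
```

In the actual study this comparison is repeated across time windows and cortical regions, which is what yields the millisecond-level timing estimates reported in the abstract.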
The processing of words containing inflectional affixes triggers morphophonological parsing and affix-related grammatical information processing. Increased perceptual complexity related to stem-affix parsing is hypothesized to create predominantly domain-general processing demands, whereas grammatical processing primarily implicates domain-specific linguistic demands. Exploiting the properties of Russian morphology and syntax, we designed an fMRI experiment to separate out the neural systems supporting these two demand types, contrasting inflectional complexity, syntactic (phrasal) complexity, and derivational complexity in three comparisons: (a) increase in parsing demands while controlling for grammatical complexity (inflections vs. phrases), (b) increase in grammatical processing demands, and (c) combined demands of morphophonological parsing and grammatical processing (inflections and phrases vs. derivations). Left inferior frontal and bilateral temporal areas are most active when the two demand types are combined, with inflectional and phrasal complexity contrasting strongly with derivational complexity (which generated only bilateral temporal activity). Increased stem-affix parsing demands alone did not produce unique activations, whereas grammatical structure processing activated bilateral superior and middle temporal areas. Selective left frontotemporal language system engagement for short phrases and inflections seems to be driven by simultaneous and interdependent domain-general and domain-specific processing demands.
Language comprehension relies on a multitude of domain-general and domain-specific cognitive operations. This study asks whether domain-specific grammatical computations are obligatorily invoked whenever we process linguistic input. Using fMRI and three complementary measures of neural activity, we tested how domain-general and domain-specific demands of single word comprehension engage cortical language networks, and whether the left frontotemporal network (commonly taken to support domain-specific grammatical computations) automatically processes grammatical information present in inflectionally complex words. In a natural listening task, participants were presented with words that manipulated domain-general and domain-specific processing demands in a 2 × 2 design. The results showed that only the domain-general demands of mapping words onto their representations consistently engaged the language processing system during single word comprehension, triggering increased activity and connectivity in bilateral frontotemporal regions, as well as bilateral encoding across multivoxel activity patterns. In contrast, inflectional complexity failed to activate left frontotemporal regions in this task, implying that domain-specific grammatical processing in the left hemisphere is not automatically triggered when the processing context does not specifically require such analysis. This suggests that the cortical computations invoked by language processing critically depend on the current communicative goals and demands, underlining the importance of domain-general processes in language comprehension and arguing against the strong domain-specific view of left-hemisphere (LH) network function.