Background: Abnormal accumulation of amyloid β 1-42 oligomers (AβO 1-42), a hallmark of Alzheimer's disease, impairs hippocampal theta-nested gamma oscillations and long-term potentiation (LTP), which are believed to underlie learning and memory. Parvalbumin-positive (PV) and somatostatin-positive (SST) interneurons are critically involved in theta-nested gamma oscillogenesis and LTP induction. However, how AβO 1-42 affects PV and SST interneuron circuits is unclear. Through optogenetic manipulation of PV and SST interneurons and computational modeling of the hippocampal neural circuits, we dissected the contributions of PV and SST interneuron circuit dysfunction to AβO 1-42-induced impairments of hippocampal theta-nested gamma oscillations and oscillation-induced LTP.

Results: Targeted whole-cell patch-clamp recordings and optogenetic manipulations of PV and SST interneurons during in vivo-like, optogenetically induced theta-nested gamma oscillations in vitro revealed that AβO 1-42 causes synapse-specific dysfunction in PV and SST interneurons. AβO 1-42 selectively disrupted CA1 pyramidal cell (PC)-to-PV interneuron and PV-to-PC synapses to impair theta-nested gamma oscillogenesis. In contrast, while having no effect on PC-to-SST or SST-to-PC synapses, AβO 1-42 selectively disrupted SST interneuron-mediated disinhibition of CA1 PCs to impair theta-nested gamma oscillation-induced spike timing-dependent LTP (tLTP). These AβO 1-42-induced impairments of gamma oscillogenesis and oscillation-induced tLTP were fully restored by optogenetic activation of PV and SST interneurons, respectively, further supporting synapse-specific dysfunction in PV and SST interneurons. Finally, computational modeling of hippocampal neural circuits including CA1 PCs and PV and SST interneurons confirmed the experimental observations and further revealed distinct functional roles of PV and SST interneurons in theta-nested gamma oscillations and tLTP induction.

Conclusions: Our results reveal that AβO 1-42 causes synapse-specific dysfunction in PV and SST interneurons, and that optogenetic modulation of these interneurons presents a potential therapeutic target for restoring hippocampal network oscillations and synaptic plasticity in Alzheimer's disease.
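The abstract does not specify the computational model, but its core mechanism (gamma generated by a reciprocal PC-PV loop riding on theta-band drive) can be illustrated with a toy rate model. The sketch below is a minimal Wilson-Cowan-style simulation; the sigmoid nonlinearity, time constants, and weights are assumptions chosen for readability, not the authors' parameters.

```python
# Toy Wilson-Cowan-style sketch of theta-nested gamma from a PC-PV loop.
# Every parameter below is an illustrative assumption, not the paper's model.
import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-x))        # sigmoidal rate nonlinearity

dt = 0.1e-3                                 # 0.1 ms time step
t = np.arange(0.0, 1.0, dt)                 # 1 s of simulated time
tau_e, tau_i = 6e-3, 3e-3                   # PC slower than PV (assumed)
w_ee, w_ei, w_ie = 8.0, 12.0, 10.0          # PC->PC, PV->PC, PC->PV weights

theta_drive = 2.0 + 2.0 * np.sin(2 * np.pi * 8.0 * t)  # 8 Hz theta input to PCs
E = np.zeros_like(t)                        # PC population rate
I = np.zeros_like(t)                        # PV population rate
for k in range(len(t) - 1):
    E[k + 1] = E[k] + dt / tau_e * (-E[k] + f(w_ee * E[k] - w_ei * I[k] + theta_drive[k] - 4.0))
    I[k + 1] = I[k] + dt / tau_i * (-I[k] + f(w_ie * E[k] - 3.0))

# Weakening w_ie or w_ei (the PC-to-PV and PV-to-PC synapses the paper reports
# AβO 1-42 disrupts) dampens the fast rhythm nested within each theta cycle.
```

In this toy setting, the fast E-I loop plays the role of the PV-mediated gamma generator, so degrading its synapses mimics the reported oscillogenesis deficit at the level of the model's dynamics only.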
The mammalian visual system comprises parallel, hierarchical, specialized pathways. Different pathways are specialized insofar as they use representations that are more suitable for supporting specific downstream behaviours. The clearest example is the specialization of the ventral ('what') and dorsal ('where') pathways of the visual cortex, which support behaviours related to visual recognition and movement, respectively. To date, deep neural networks have mostly been used as models of the ventral, recognition pathway, and it is unknown whether both pathways can be modelled with a single deep artificial neural network (ANN). Here, we ask whether a single model with a single loss function can capture the properties of both the ventral and the dorsal pathways. We explore this question using data from mice, which, like other mammals, have specialized pathways that appear to support recognition and movement behaviours. We show that when we train a deep neural network architecture with two parallel pathways using a self-supervised predictive loss function, we outperform other models in fitting mouse visual cortex and can model both the dorsal and ventral pathways. These results demonstrate that self-supervised predictive learning applied to parallel-pathway architectures can account for some of the functional specialization seen in mammalian visual systems.
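As a concrete illustration of the training setup described above (one network, two parallel pathways, a single self-supervised predictive objective), here is a minimal PyTorch sketch. The layer sizes, the next-frame mean-squared-error loss, and the name TwoPathwayPredictor are assumptions for illustration; the paper's actual architecture differs.

```python
# Minimal sketch: two parallel conv pathways trained with a single
# self-supervised predictive loss (predict the next frame). Illustrative only.
import torch
import torch.nn as nn

class TwoPathwayPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        def pathway():                      # small conv stack per pathway
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
        self.ventral = pathway()            # candidate 'what' stream
        self.dorsal = pathway()             # candidate 'where' stream
        self.decoder = nn.Sequential(       # fuse both streams, predict a frame
            nn.ConvTranspose2d(64, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, frame):
        z = torch.cat([self.ventral(frame), self.dorsal(frame)], dim=1)
        return self.decoder(z)              # predicted next frame

model = TwoPathwayPredictor()
frames = torch.randn(8, 1, 64, 64)          # stand-in for video frames at time t
next_frames = torch.randn(8, 1, 64, 64)     # frames at time t+1
loss = nn.functional.mse_loss(model(frames), next_frames)
loss.backward()                             # one predictive loss trains both pathways
```

The key design point mirrored here is that neither pathway receives its own supervised objective: any 'what'/'where' specialization has to emerge from the shared predictive loss.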
We argue that an explanation of relevance realization is a pervasive problem within cognitive science, and that it is becoming the criterion of the cognitive in terms of which a new framework for doing cognitive science is emerging. We articulate that framework and then use it to provide the beginnings of a theory of relevance realization that incorporates many existing insights implicit within the contributing disciplines of cognitive science. We also introduce some theoretical, and potentially technical, innovations motivated by the articulation of those insights. Finally, we show how the explication of the framework and the development of the theory help to clear up important incompleteness and confusion within both Montague's work and Sperber and Wilson's theory of relevance.
In recent years, deep learning has achieved unprecedented success in various domains, especially image, text, and speech processing. These breakthroughs may hold promise for neuroscience, and especially for brain-imaging investigators who are beginning to analyze thousands of participants. However, deep learning is only beneficial if the data contain nonlinear relationships and if those relationships are exploitable at currently available sample sizes. We systematically profiled the performance of deep, kernel, and linear models as a function of sample size on UK Biobank brain images, with established machine learning datasets as references. On MNIST and Zalando Fashion, prediction accuracy consistently improved when escalating from linear models to shallow nonlinear models, and improved further when switching to deep nonlinear models; the more observations were available for model training, the greater the performance gain. In contrast, on structural or functional brain scans, simple linear models performed on par with more complex, highly parameterized models in age/sex prediction across increasing sample sizes, and linear models kept improving as the sample size approached ~10,000 participants. Our results indicate that the performance gain of linear models with additional data does not saturate at the limit of current feasibility, yet the nonlinearities of common brain scans remain largely inaccessible to both kernel and deep learning methods at any examined scale.
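The benchmarking logic of the study above (fit linear, kernel, and deep models on progressively larger training sets and compare held-out accuracy) can be sketched with standard scikit-learn estimators. The synthetic dataset and model settings below are placeholders for illustration; the actual study used UK Biobank scans, MNIST, and Zalando Fashion.

```python
# Sketch of a sample-size learning-curve comparison across model classes.
# Synthetic data stands in for the real benchmarks; settings are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=12000, n_features=100, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=2000, random_state=0)

models = {
    "linear": LogisticRegression(max_iter=1000),
    "kernel": SVC(kernel="rbf"),
    "deep": MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500),
}
for n in [100, 1000, 10000]:                # escalating training-set sizes
    for name, model in models.items():
        model.fit(X_train[:n], y_train[:n])
        print(n, name, round(model.score(X_test, y_test), 3))
```

On data with exploitable nonlinear structure, the kernel and deep rows should pull ahead as n grows; the study's finding is that on brain scans they do not.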
Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to those of their living counterparts. The embodied Turing test shifts the focus away from capabilities, such as game playing and language, that are especially well developed in or unique to humans, and toward capabilities, inherited from over 500 million years of evolution, that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.
Scientists have long conjectured that the neocortex learns the structure of the environment in a predictive, hierarchical manner. To do so, expected, predictable features are differentiated from unexpected ones by comparing bottom-up and top-down streams of data. It is theorized that the neocortex then changes the representation of incoming stimuli, guided by differences in the responses to expected and unexpected events. Such differences in cortical responses have been observed; however, it remains unknown whether these unexpected event signals govern subsequent changes in the brain’s stimulus representations, and, thus, govern learning. Here, we show that unexpected event signals predict subsequent changes in responses to expected and unexpected stimuli in individual neurons and distal apical dendrites that are tracked over a period of days. These findings were obtained by observing layer 2/3 and layer 5 pyramidal neurons in primary visual cortex of awake, behaving mice using two-photon calcium imaging. We found that many neurons in both layers 2/3 and 5 showed large differences between their responses to expected and unexpected events. These unexpected event signals also determined how the responses evolved over subsequent days, in a manner that was different between the somata and distal apical dendrites. This difference between the somata and distal apical dendrites may be important for hierarchical computation, given that these two compartments tend to receive bottom-up and top-down information, respectively. Together, our results provide novel evidence that the neocortex indeed instantiates a predictive hierarchical model in which unexpected events drive learning.
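The core analysis implied by this abstract (score each unit's unexpected event signal on one day, then test whether it predicts that unit's later response change) can be sketched as follows. The data here are simulated stand-ins and all variable names are assumptions for illustration, not the paper's pipeline.

```python
# Illustrative analysis sketch with simulated data (not the paper's pipeline):
# does a neuron's unexpected-event signal predict its later response change?
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 200, 50
# Simulated trial-wise responses (arbitrary units, assumed data)
expected_d1 = rng.normal(1.0, 0.3, (n_neurons, n_trials))    # day 1, expected
unexpected_d1 = rng.normal(1.2, 0.5, (n_neurons, n_trials))  # day 1, unexpected
expected_d2 = expected_d1 + rng.normal(0.0, 0.2, (n_neurons, n_trials))  # later day

# Unexpected-event signal: mean unexpected minus mean expected response, day 1
surprise = unexpected_d1.mean(axis=1) - expected_d1.mean(axis=1)
# Subsequent change: shift in the expected-stimulus response across days
change = expected_d2.mean(axis=1) - expected_d1.mean(axis=1)

r = np.corrcoef(surprise, change)[0, 1]     # does surprise predict change?
print(f"correlation(day-1 surprise, response change): r = {r:.2f}")
```

With the independent noise simulated here, r should hover near zero; the paper's claim is that in real somata and distal apical dendrites this relationship is systematic, and differs between the two compartments.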