The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiological recordings with computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19 of the 21 OSN classes to a panel of 19 odors. This was achieved by creating larvae expressing just one functional class of odorant receptor, and hence a single OSN class. Odor response profiles of each OSN class were highly specific and unique. However, many OSN–odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model accurately predicted odor identity from raw OSN responses; prediction accuracy ranged from 12% to 77% (mean across all odors 45.2%) but was always significantly above chance (5.6%). However, prediction accuracy for a given odor did not correlate with the strength of wild-type larvae's responses to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay, but others were not. We conclude that our model of the peripheral code captures basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
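The abstract does not give the decoder's form beyond "Bayesian decoding of raw OSN responses." A minimal sketch of one standard instantiation, assuming independent Poisson spike-count likelihoods per OSN and a flat prior over odors; the tuning matrix, trial counts, and response window below are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_osns, n_odors = 19, 19
# Hypothetical mean spike counts per response window for each OSN x odor
# pair; in the study these would come from the recordings and would
# include spontaneous activity.
rates = rng.gamma(shape=2.0, scale=3.0, size=(n_osns, n_odors))

def decode(counts, rates):
    """Maximum a posteriori odor under independent Poisson likelihoods
    and a flat prior; the log(k!) term is constant across odors and is
    dropped."""
    log_like = counts @ np.log(rates) - rates.sum(axis=0)
    return int(np.argmax(log_like))

# Estimate per-odor decoding accuracy on simulated trials.
n_trials = 200
accuracy = np.empty(n_odors)
for odor in range(n_odors):
    trials = rng.poisson(rates[:, odor], size=(n_trials, n_osns))
    accuracy[odor] = np.mean([decode(t, rates) == odor for t in trials])

print(f"mean accuracy {accuracy.mean():.2f}, chance {1 / n_odors:.3f}")
```

Under this kind of scheme, odors whose evoked counts sit close to spontaneous firing decode poorly, consistent with the wide 12–77% accuracy range reported above.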
It is often assumed that Hebbian synaptic plasticity forms a cell assembly, a mutually interacting group of neurons that encodes memory. However, in recurrently connected networks with pure Hebbian plasticity, cell assemblies typically diverge or fade under ongoing changes of synaptic strength. Previously proposed mechanisms for stabilizing cell assemblies do not robustly reproduce the experimentally reported unimodal and long-tailed distribution of synaptic strengths. Here, we show that augmenting Hebbian plasticity with experimentally observed intrinsic spine dynamics can stabilize cell assemblies and reproduce the distribution of synaptic strengths. Moreover, we posit that strong intrinsic spine dynamics impair learning performance. Our theory explains how excessively strong spine dynamics, experimentally observed in several animal models of autism spectrum disorder, impair learning of associations in the brain.
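The abstract does not specify the form of the intrinsic dynamics. A minimal sketch of one common way to model activity-independent spine fluctuations, not the published model: an Ornstein-Uhlenbeck process on log synaptic strength (multiplicative noise plus a weak pull toward a set point), whose stationary distribution is log-normal, i.e., unimodal and long-tailed. All parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_syn = 5000
assembly = np.arange(n_syn) < 500        # synapses inside the assembly
log_w = np.zeros(n_syn)                  # log synaptic strength

theta, sigma = 0.01, 0.01   # mean reversion and noise of intrinsic dynamics
eta = 0.005                 # Hebbian step size (illustrative)

for step in range(20000):
    # Hebbian term: only assembly synapses see correlated pre/post activity.
    co_active = assembly & (rng.random(n_syn) < 0.2)
    log_w += eta * co_active
    # Intrinsic spine dynamics: Ornstein-Uhlenbeck in log space, i.e.,
    # multiplicative fluctuations with a weak pull toward a set point,
    # giving a stationary log-normal (unimodal, long-tailed) distribution.
    log_w += -theta * log_w + sigma * rng.normal(size=n_syn)

w = np.exp(log_w)
print(f"assembly mean {w[assembly].mean():.2f}, "
      f"background mean {w[~assembly].mean():.2f}")
```

The mean reversion bounds runaway Hebbian growth, illustrating how intrinsic fluctuations can stabilize an assembly while shaping the strength distribution; increasing sigma in this toy model swamps the Hebbian signal, mirroring the claim that excessively strong spine dynamics impair learning.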
Abnormal gamma-band power across cortex and striatum is an important phenotype of Huntington's disease (HD) in both patients and animal models, but neither the origin nor the functional relevance of this phenotype is well understood. Here, we analyzed local field potential (LFP) activity in freely behaving, symptomatic R6/2 and Q175 mouse models and corresponding wild-type (WT) controls. We focused on periods of quiet rest, which show strong gamma activity in HD mice. Simultaneous recordings from motor cortex and its target area in dorsal striatum in the R6/2 model revealed exaggerated phase-amplitude coupling (PAC) relative to WT, in which the phase of delta frequencies (1–4 Hz) in cortex and striatum modulated the amplitude of striatal low-gamma frequencies (25–55 Hz); however, we found no evidence that abnormal cortical activity alone can account for the increase in striatal gamma power. Both HD mouse models showed stronger coupling of gamma amplitude to delta phase and more unimodal phase distributions than their WT counterparts. To assess the possible role of striatal fast-spiking interneurons (FSIs) in these phenomena, we developed a computational model based on additional striatal recordings from Q175 mice. The model readily reproduced the observed changes in peak gamma frequency and gamma power ratio, accounting for several experimental findings reported in the literature. Our results suggest that HD is characterized by both a reorganization of cortico-striatal drive and specific population changes related to intrastriatal synaptic coupling.
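A minimal sketch of how delta-gamma PAC of the kind described above is typically quantified, using the Tort et al. (2010) modulation index; the band edges follow the abstract (delta 1–4 Hz, low gamma 25–55 Hz), while the synthetic LFP, sampling rate, and bin count are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 30, 1 / fs)
# Synthetic LFP: gamma bursts whose amplitude rides the delta phase.
delta = np.sin(2 * np.pi * 2 * t)
lfp = delta + (1 + delta) * 0.3 * np.sin(2 * np.pi * 40 * t) \
      + 0.5 * np.random.default_rng(2).normal(size=t.size)

def bandpass(x, lo, hi, fs, order=3):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(lfp, 1, 4, fs)))    # delta phase
amp = np.abs(hilbert(bandpass(lfp, 25, 55, fs)))      # low-gamma amplitude

# Tort modulation index: normalized KL divergence of the phase-binned
# mean-amplitude distribution from the uniform distribution.
n_bins = 18
bins = np.digitize(phase, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
mean_amp = np.array([amp[bins == k].mean() for k in range(n_bins)])
p = mean_amp / mean_amp.sum()
mi = (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)
print(f"modulation index: {mi:.3f}")
```

Computing this index per genotype, together with histograms of the preferred coupling phase, would yield the HD-versus-WT comparisons of coupling strength and phase unimodality described above.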
It has previously been shown that, using spike-timing-dependent plasticity (STDP), neurons can adapt to the beginning of a repeating spatio-temporal firing pattern in their input. In the present work, we demonstrate that this mechanism can be extended to train recognizers for longer spatio-temporal input signals. Using a population of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form in which each neuron is selective to a different segment of a repeating input pattern, and the neurons are feed-forwardly connected such that both the correct input segment and the firing of the previous neurons are required to activate the next neuron in the chain. This is akin to a simple class of finite state automata. We show that nearest-neighbor STDP (where only the pre-synaptic spike most recent to a post-synaptic one is considered) leads to "nearest-neighbor" chains in which connections form only between subsequent states in a chain (similar to classic "synfire chains"). In contrast, all-to-all STDP (where all pre- and post-synaptic spike pairs matter) leads to multiple connections that can span several temporal stages in the chain; these connections respect the temporal order of the neurons. We also demonstrate that previously learnt individual chains can be "stitched together" by repeatedly presenting them in a fixed order. In this way, longer sequence recognizers, and potentially also nested structures, can be formed. Robustness of recognition to speed variations in the input patterns is shown to depend on the rise times of post-synaptic potentials and on membrane noise. We argue that the memory capacity of the model is high and could theoretically be increased further using sparse codes.
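The nearest-neighbor versus all-to-all distinction above concerns which pre/post spike pairs contribute to a pair-based STDP update. A minimal sketch contrasting the two spike-pairing schemes for a single synapse; the exponential window and its amplitudes and time constant are common textbook values, not the paper's parameters:

```python
import numpy as np

a_plus, a_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau = 20.0                      # STDP time constant (ms)

def stdp_kernel(dt):
    """Weight change for a spike pair separated by dt = t_post - t_pre."""
    return np.where(dt >= 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

def all_to_all(pre, post):
    """Sum the kernel over every pre/post spike pair."""
    dt = post[:, None] - pre[None, :]
    return stdp_kernel(dt).sum()

def nearest_neighbor(pre, post):
    """Each post spike pairs only with the most recent preceding pre
    spike; each pre spike pairs only with the most recent preceding
    post spike."""
    dw = 0.0
    for tp in post:
        earlier = pre[pre <= tp]
        if earlier.size:
            dw += a_plus * np.exp(-(tp - earlier[-1]) / tau)
    for tq in pre:
        earlier = post[post < tq]
        if earlier.size:
            dw -= a_minus * np.exp(-(tq - earlier[-1]) / tau)
    return dw

pre = np.array([10.0, 30.0, 50.0])
post = np.array([15.0, 35.0, 55.0])
print(all_to_all(pre, post), nearest_neighbor(pre, post))
```

Because the all-to-all rule also credits pairs several spikes apart, it can strengthen connections that skip intermediate stages of a chain, while the nearest-neighbor rule confines potentiation to immediately adjacent stages, as described above.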
Spike-timing-dependent plasticity (STDP) is a learning mechanism used extensively in neural modelling. The learning rule has been shown to allow a neuron to find the beginning of a repeated spatio-temporal pattern among its afferents. In this study we show that such learning depends on background activity and becomes unstable in noisy settings. We also present insights into the neuron's encoding.
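A minimal sketch of the input construction implied above, assuming the standard setup for this task: a fixed spatio-temporal pattern embedded periodically in Poisson background spikes, with the background rate as the parameter whose level the study argues the learning depends on. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def afferent_spikes(n_afferents, duration, bg_rate, pattern, period):
    """Return one spike-time array per afferent: Poisson background at
    bg_rate (Hz) plus a fixed pattern, given as (afferent, offset-in-s)
    pairs, repeated every `period` seconds."""
    trains = [rng.uniform(0, duration, rng.poisson(bg_rate * duration))
              for _ in range(n_afferents)]
    for start in np.arange(0, duration, period):
        for aff, offset in pattern:
            trains[aff] = np.append(trains[aff], start + offset)
    return [np.sort(tr) for tr in trains]

pattern = [(0, 0.010), (1, 0.025), (2, 0.040)]   # hypothetical pattern
trains = afferent_spikes(n_afferents=100, duration=10.0,
                         bg_rate=5.0, pattern=pattern, period=0.5)
```

Sweeping bg_rate while training an STDP neuron on such input is one way to probe the reported dependence on background activity.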