Understanding how one-shot learning can be accomplished through synaptic plasticity in neural networks of the brain is a major open problem. We propose that approximations to backpropagation through time (BPTT) in recurrent networks of spiking neurons (RSNNs), such as e-prop, cannot achieve this because their local synaptic plasticity is gated by learning signals that are rather ad hoc from a biological perspective: random projections of instantaneously arising losses at the network outputs, analogous to Broadcast Alignment for feedforward networks. In the brain, by contrast, synaptic plasticity is gated by learning signals such as dopamine, which are emitted by specialized brain areas, e.g. the VTA. These brain areas have arguably been optimized by evolution to gate synaptic plasticity in such a way that fast learning of survival-relevant tasks is enabled. We found that a corresponding model architecture, where learning signals are emitted by a separate RSNN that is optimized to facilitate fast learning, enables one-shot learning via local synaptic plasticity in RSNNs for large families of learning tasks. The same learning approach also supports fast spike-based learning of posterior probabilities of potential input sources, thereby providing a new basis for probabilistic reasoning in RSNNs. Our new learning approach also solves an open problem in neuromorphic engineering, where on-chip one-shot learning capability is highly desirable for spike-based neuromorphic devices but has so far not been achieved. Our method can easily be mapped onto neuromorphic hardware, thereby solving this problem.
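The gating of local plasticity by an external learning signal can be illustrated by a three-factor rule: an eligibility trace formed from locally available pre- and postsynaptic activity is converted into a weight change only when a learning signal arrives. The sketch below is illustrative only; the rate-based neurons, trace decay constant, and update rule are simplifying assumptions, not the e-prop equations or the learned learning signals of the proposed architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_rec = 4, 3
w = rng.normal(scale=0.5, size=(n_rec, n_in))    # synaptic weights
e_trace = np.zeros_like(w)                       # local eligibility traces
eta, decay = 0.05, 0.9

def step(x, learning_signal):
    """One time step of a three-factor plasticity rule: local eligibility
    traces (outer product of pre- and postsynaptic activity) are turned
    into weight updates only when gated by the learning signal."""
    global w, e_trace
    post = np.tanh(w @ x)                          # stand-in for neuron activity
    e_trace = decay * e_trace + np.outer(post, x)  # purely local, no error info
    w += eta * learning_signal * e_trace           # third factor gates the update
    return post

x = rng.normal(size=n_in)
w0 = w.copy()
step(x, learning_signal=0.0)    # no learning signal -> weights stay frozen
assert np.allclose(w, w0)
step(x, learning_signal=1.0)    # signal present -> traces become updates
```

In the architecture described above, the scalar `learning_signal` would itself be produced by a separate, optimized RSNN rather than by a random projection of the output loss.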
In order to port the performance of trained artificial neural networks (ANNs) to spiking neural networks (SNNs), which can be implemented in neuromorphic hardware with drastically reduced energy consumption, an efficient ANN-to-SNN conversion is needed. Previous conversion schemes focused on representing the analog output of a rectified linear (ReLU) gate in the ANN by the firing rate of a spiking neuron. But this is not possible for other commonly used ANN gates, and it reduces throughput even for ReLU gates. We introduce a new conversion method where a gate in the ANN, which can be of essentially any type, is emulated by a small circuit of spiking neurons with At Most One Spike (AMOS) per neuron. We show that this AMOS conversion improves the accuracy of SNNs on ImageNet from 74.60% to 80.97%, thereby bringing it within reach of the best available ANN accuracy (85.0%). The Top-5 accuracy of SNNs is raised to 95.82%, coming even closer to the best Top-5 performance of 97.2% for ANNs. In addition, AMOS conversion improves latency and throughput of spike-based image classification by several orders of magnitude. These results suggest that SNNs provide a viable direction for developing highly energy-efficient hardware for AI that combines high performance with versatility of applications.
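One simple way to represent a bounded analog gate output with at most one spike per neuron is a binary place-value code across a small group of neurons: each neuron either fires once or stays silent, and the spike pattern encodes the value. The sketch below illustrates only this coding idea; the function names, the 8-bit resolution, and the plain binary code are assumptions for illustration, not the exact AMOS circuit of the paper:

```python
def amos_encode(y, n_bits=8, y_max=1.0):
    """Encode a bounded analog value y in [0, y_max] by n_bits neurons,
    each firing at most one spike (binary place-value code, MSB first).
    Illustrative encoding only, not the published AMOS circuit."""
    y = max(0.0, min(y, y_max))                 # clip like a bounded ReLU
    level = int(round(y / y_max * (2 ** n_bits - 1)))
    return [(level >> k) & 1 for k in reversed(range(n_bits))]

def amos_decode(spikes, y_max=1.0):
    """Recover the analog value from the spike/no-spike pattern."""
    level = 0
    for s in spikes:
        level = (level << 1) | s
    return level / (2 ** len(spikes) - 1) * y_max

# With 8 neurons, any value in [0, 1] is recovered to within one
# quantization step (1/255), using at most one spike per neuron.
assert abs(amos_decode(amos_encode(0.3)) - 0.3) < 1 / 255
```

Because each neuron spikes at most once per input, such a circuit needs only a single pass per classification instead of a long rate-estimation window, which is where the latency and throughput gains come from.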
The capability to plan a sequence of actions toward a given goal is a cornerstone of higher cognitive function. But compelling models for planning in the brain, or more generally in any type of neural network, are missing. We present a simple model for planning in neural networks, the Cognitive Map Learner (CML), that can achieve high performance on a variety of tasks by learning a cognitive map of the environment. The way the CML constructs a cognitive map is based on a fundamental insight from neuroscience: observations from the environment acquire meaning for the organism primarily through performing actions that change them. The CML also provides a viable alternative to reinforcement learning in robotics, since it learns faster and is more flexible, owing to its task-agnostic design principles. The design of the CML is also of interest from the perspective of the relationship between self-attention networks (Transformers) and neural networks, since it combines attractive features of both.
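The core principle, that observations acquire meaning through the actions that change them, can be sketched as learning state and action embeddings such that an action's embedding predicts the change of the state embedding it causes; planning then greedily picks the action whose embedding points toward the goal. Everything below (the toy line environment, embedding dimension, delta rule, and learning rate) is an illustrative assumption, not the published CML equations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy environment: 5 states on a line; action 1 moves right, action 0 moves left.
n_states, n_actions, d = 5, 2, 8

def env_step(s, a):
    return s + 1 if a == 1 else s - 1

def valid_actions(s):
    return [a for a in range(n_actions) if 0 <= env_step(s, a) < n_states]

V = rng.normal(scale=0.1, size=(n_states, d))   # state (observation) embeddings
A = rng.normal(scale=0.1, size=(n_actions, d))  # action embeddings
eta = 0.2

# Learning: drive the prediction error V[s'] - (V[s] + A[a]) to zero with a
# delta rule, so that an action's embedding encodes the observation change
# that it causes.
for _ in range(2000):
    s = int(rng.integers(n_states))
    acts = valid_actions(s)
    a = acts[int(rng.integers(len(acts)))]
    s2 = env_step(s, a)
    err = V[s2] - (V[s] + A[a])
    V[s] += eta * err
    A[a] += eta * err
    V[s2] -= eta * err

# Planning: greedily take the action whose embedding points most strongly
# toward the goal in embedding space.
def plan(s, goal, max_steps=10):
    path = [s]
    for _ in range(max_steps):
        if s == goal:
            break
        a = max(valid_actions(s), key=lambda a: float((V[goal] - V[s]) @ A[a]))
        s = env_step(s, a)
        path.append(s)
    return path
```

Note that no reward or task-specific objective appears anywhere in the learning loop; the same learned map supports planning toward any goal state, which is the sense in which the design is task-agnostic.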
Genetically encoded structure endows neural networks of the brain with innate computational capabilities that enable odor classification and basic motor control right after birth. It is also conjectured that the stereotypical laminar organization of neocortical microcircuits provides basic computing capabilities on which subsequent learning can build. However, it has remained unknown how nature achieves this. Insight from artificial neural networks does not help to solve this problem, since their computational capabilities result from learning. We show that genetically encoded control over connection probabilities between different types of neurons suffices for programming substantial computing capabilities into neural networks. This insight also provides a method for enhancing computing and learning capabilities of artificial neural networks and neuromorphic hardware through clever initialization.
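Genetic control over type-to-type connection probabilities can be sketched as sampling individual synapses from a small probability table indexed by the types of the pre- and postsynaptic neurons: the "genome" specifies only the table, and all structure beyond these type-level statistics is left to chance. The neuron counts, type labels, and table values below are made-up assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical network of 45 neurons drawn from 3 neuron types.
types = np.array([0] * 20 + [1] * 15 + [2] * 10)   # type label of each neuron

# Genetically encoded table: connection probability from source type (row)
# to target type (column). These values are illustrative assumptions.
p = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.0, 0.7],
              [0.0, 0.5, 0.1]])

# Sample each potential synapse independently with the probability given
# by the (source type, target type) entry of the table.
n = len(types)
conn = rng.random((n, n)) < p[types[:, None], types[None, :]]
np.fill_diagonal(conn, False)                      # no self-connections
```

A few numbers in the table thus determine the statistics of thousands of individual synapses, which is what makes this a plausibly compact, genetically encodable specification of initial network structure.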