Abstract: In order to cross a street without being run over, we need to extract, very rapidly, the hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyra…
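The abstract attributes the emergence of this capability to STDP acting on the connections between neurons. As a rough illustration of the kind of plasticity rule involved, here is a minimal pair-based STDP weight update; the rule variant, parameter names, and time constants are generic textbook assumptions, not taken from the paper:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one, depress otherwise. Times are in milliseconds."""
    dt = t_post - t_pre  # spike-time difference
    if dt > 0:   # pre before post -> potentiation
        dw = a_plus * np.exp(-dt / tau_plus)
    else:        # post before (or with) pre -> depression
        dw = -a_minus * np.exp(dt / tau_minus)
    # keep the weight in its allowed range
    return float(np.clip(w + dw, 0.0, w_max))
```

In this form the update depends only on the timing of a single spike pair; the sign of `dt` decides between potentiation and depression, and the exponential factor makes closely timed pairs change the weight the most.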
“…Neural networks that use biologically plausible neurons and learning mechanisms have become the focus of a number of recent pattern recognition studies [1,2,3]. Spiking neurons and adaptive synapses between neurons contribute to a new approach in cognition, decision making, and learning [4][5][6][7][8].…”
“…While this is probably acceptable for many practically relevant systems, many other systems might need to adapt to a specific user or a new environment. In such cases it would be necessary to use neuromorphic hardware that supports efficient online learning [5], [21], [22] in combination with online learning algorithms [23], [24], [25], [26], [27]. However, so far it remains a challenge to achieve performance comparable to conversion methods using such online learning approaches.…”
Abstract-We present an approach to constructing a neuromorphic device that responds to language input by producing neuron spikes in proportion to the strength of the appropriate positive or negative emotional response. Specifically, we perform a fine-grained sentiment analysis task with implementations on two different systems: one using conventional spiking neural network (SNN) simulators and the other using IBM's Neurosynaptic System TrueNorth. Input words are projected into a high-dimensional semantic space and processed through a fully-connected neural network (FCNN) containing rectified linear units trained via backpropagation. After training, this FCNN is converted to an SNN by substituting the ReLUs with integrate-and-fire neurons. We show that there is practically no performance loss due to conversion to a spiking network on a sentiment analysis test set, i.e., correlations between predictions and human annotations differ by less than 0.02 between the original DNN and its spiking equivalent. Additionally, we show that the SNN generated with this technique can be mapped to existing neuromorphic hardware, in our case the TrueNorth chip. Mapping to the chip involves 4-bit synaptic weight discretization and adjustment of the neuron thresholds. The resulting end-to-end system can take a user input, i.e., a word in a vocabulary of over 300,000 words, and estimate its sentiment on TrueNorth with a power consumption of approximately 50 µW.
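The ReLU-to-SNN conversion described above rests on a well-known correspondence: the firing rate of a non-leaky integrate-and-fire neuron with reset-by-subtraction approximates a ReLU of its (normalized) input drive. A minimal sketch of that correspondence, with the constant-drive setup and function names chosen for illustration rather than taken from the paper's implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def if_rate(drive, threshold=1.0, steps=1000):
    """Simulate a non-leaky integrate-and-fire neuron receiving a constant
    input 'drive' per time step. With reset-by-subtraction, its spike rate
    approximates relu(drive) / threshold."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += drive
        if v >= threshold:
            v -= threshold   # subtracting (not zeroing) preserves residual charge
            spikes += 1
    return spikes / steps

# The spike rate tracks the ReLU output for sub-threshold positive drive:
for d in (-0.5, 0.2, 0.7):
    print(d, relu(d), if_rate(d))
```

Negative drive never reaches threshold, so the rate is zero, mirroring the ReLU's dead region; positive drive yields a rate proportional to the activation, which is why substituting units in a trained FCNN can preserve its input-output behavior.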
“…It will be interesting to investigate the effects of local structure in non-deterministic finite automata, as in this case algorithmic minimization of a given automaton is much more expensive than in the deterministic case [1]. Thus, similar state-space optimization techniques might prove to be advantageous for neural models of probabilistic decision making [2], [3].…”

Figure caption: Maze task, in which a virtual agent has to find a target position using visual cues.
Section: Discussion
“…FSMs can model many aspects of high-level deterministic behavior, such as production of movement sequences, navigation, state-dependent decision making, logical reasoning, or understanding and production of language. Although many neural processes can be better modeled by probabilistic graphical models, taking into account the inherent environmental and neural stochasticity [2], [3], almost deterministic sequences of neural activation have been observed in brains of various species and during diverse activities. Examples include synfire chains [4], sequences during song production in birds [5], or location-dependent patterns during navigation in rats [6].…”
Deterministic behavior can be modeled conveniently in the framework of finite automata. We present a recurrent neural network model based on biologically plausible circuit motifs that can learn deterministic transition models from given input sequences. Furthermore, we introduce simple structural constraints on the connectivity that are inspired by biology. Simulation results show that this leads to great improvements in terms of training time, and to efficient use of resources in the converged system. Previous work has shown how specific instances of finite-state machines (FSMs) can be synthesized in recurrent neural networks by interconnecting multiple soft winner-take-all (SWTA) circuits, small circuits that can faithfully reproduce many computational properties of cortical networks. We extend this framework with a reinforcement learning mechanism to learn correct state transitions as input and reward signals are provided. Not only does the network learn a model for the observed sequences, and encode it in the recurrent synaptic weights, it also finds solutions that are close to optimal in the number of states required to model the target system, leading to efficient scaling behavior as the size of the target problems increases.
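The transition-learning mechanism described in this abstract can be caricatured in tabular form: candidate successor states compete, the winner is read out WTA-style, and a reward signal adjusts the winning entry. The sketch below is a loose stand-in for the paper's SWTA circuits and reinforcement rule, not a reproduction of it; the example automaton, learning rate, and exploration scheme are invented for illustration:

```python
import random

def learn_transitions(target, n_states, alphabet,
                      steps=5000, lr=0.1, eps=0.2, seed=0):
    """Learn a deterministic transition table from a binary reward signal.
    scores[(s, x)] holds one value per candidate successor state; taking
    the argmax plays the role of a winner-take-all readout, and the reward
    nudges the winning 'weight' up or down."""
    rng = random.Random(seed)
    scores = {(s, x): [0.0] * n_states
              for s in range(n_states) for x in alphabet}
    s = 0
    for _ in range(steps):
        x = rng.choice(alphabet)
        row = scores[(s, x)]
        # epsilon-greedy winner-take-all over candidate next states
        if rng.random() < eps:
            guess = rng.randrange(n_states)
        else:
            guess = row.index(max(row))
        reward = 1.0 if guess == target[(s, x)] else -1.0
        row[guess] += lr * reward
        s = target[(s, x)]  # the environment advances along the true automaton
    # read out the learned table: the winning successor per (state, input)
    return {k: row.index(max(row)) for k, row in scores.items()}

# Example: a hypothetical 3-state automaton over inputs 'a' and 'b'
target = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 2, (1, 'b'): 0,
          (2, 'a'): 0, (2, 'b'): 1}
learned = learn_transitions(target, n_states=3, alphabet=['a', 'b'])
print(learned == target)
```

Because wrong winners are punished and correct ones rewarded, the correct successor's score eventually dominates each row, so the argmax readout recovers the target transition table; the paper's contribution goes further by also minimizing the number of states used, which this toy table does not attempt.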