Simulation of large networks of neurons is a powerful and increasingly prominent methodology for investigating brain function and structure. Dedicated parallel hardware is a natural candidate for simulating the dynamic activity of many non-linear units communicating asynchronously. It is only scientifically useful, however, if the simulation tools can be configured and run easily and quickly. We present a method to map network models to computational nodes on the SpiNNaker system, a programmable parallel neurally-inspired hardware architecture, by exploiting the hierarchies built into the model. This PArtitioning and Configuration MANager (PACMAN) system supports arbitrary network topologies and arbitrary membrane-potential and synapse dynamics, and (most importantly) decouples the model from the device, allowing a variety of languages (PyNN, Nengo, etc.) to drive the simulation hardware. Model representation operates at the Population/Projection level rather than at the level of single neurons and connections, exploiting hierarchical properties to lower the complexity of allocating resources and mapping the model onto the system. PACMAN can thus be used to generate structures coming from different models and front-ends, either with a host-based process or by parallelising the process on the SpiNNaker machine itself to speed up generation greatly. We describe the approach with a first implementation of the framework, used to configure the current generation of SpiNNaker machines, and present results from a set of key benchmarks. The system allows researchers to exploit dedicated simulation hardware which may otherwise be difficult to program. In effect, PACMAN provides automated hardware acceleration for
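The Population/Projection-level mapping described above can be illustrated with a minimal sketch. This is not the actual PACMAN API; the function name, data layout, and the per-core neuron limit are illustrative assumptions. The point is that partitioning operates on whole populations split into core-sized slices, rather than placing neurons one at a time:

```python
# Hypothetical sketch of population-level partitioning: each population
# is split into contiguous slices that fit on one core. All names here
# are illustrative, not the real PACMAN interface.

def partition_populations(populations, neurons_per_core):
    """Split each population into core-sized slices.

    populations: dict mapping population name -> neuron count.
    Returns a list of (name, start, end) half-open slices, one per core.
    """
    placements = []
    for name, size in populations.items():
        for start in range(0, size, neurons_per_core):
            end = min(start + neurons_per_core, size)
            placements.append((name, start, end))
    return placements

# Example: with 100 neurons per core, a 250-neuron excitatory population
# needs 3 cores and an 80-neuron inhibitory population needs 1.
cores = partition_populations({"excitatory": 250, "inhibitory": 80}, 100)
```

Because resource allocation works on these coarse slices, its cost scales with the number of populations and projections rather than with the number of individual neurons and synapses.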
Abstract-This paper presents an efficient approach for implementing spike-timing-dependent plasticity (STDP) on the SpiNNaker neuromorphic hardware. The event-address mapping and distributed synaptic weight storage schemes used in parallel neuromorphic hardware such as SpiNNaker make the conventional pre-post-sensitive scheme of STDP implementation inefficient, since that scheme triggers STDP whenever either a pre- or post-synaptic neuron fires. An alternative pre-sensitive scheme is presented to solve this problem, in which STDP is triggered only when a pre-synaptic neuron fires. An associated deferred event-driven model is developed to enable the pre-sensitive scheme by deferring the STDP process until sufficient spike-timing history records are available. The paper gives a detailed description of the implementation, as well as a performance estimate for STDP on a multi-chip SpiNNaker machine, along with a discussion of issues related to efficient STDP implementation on parallel neuromorphic hardware.
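The pre-sensitive scheme can be sketched as follows. This is a simplified illustration, not the SpiNNaker implementation: the exponential STDP window, the constants, and the plain list standing in for the deferred spike-timing history buffer are all assumptions for the example. The key property shown is that the weight update runs only on a pre-synaptic spike, scanning stored post-synaptic spike times:

```python
import math

# Illustrative STDP window parameters (ms); not taken from the paper.
TAU_PLUS = 20.0
TAU_MINUS = 20.0
A_PLUS = 0.01
A_MINUS = 0.012

def stdp_on_pre_spike(w, t_pre, post_history):
    """Pre-sensitive STDP: update weight w only when the pre-synaptic
    neuron fires at time t_pre, using the recorded post-synaptic spike
    times in post_history (the deferred history buffer)."""
    for t_post in post_history:
        dt = t_post - t_pre
        if dt > 0:
            # post fired after pre: potentiate
            w += A_PLUS * math.exp(-dt / TAU_PLUS)
        elif dt < 0:
            # post fired before pre: depress
            w -= A_MINUS * math.exp(dt / TAU_MINUS)
    return w
```

In the deferred event-driven model, this update would be postponed until enough post-synaptic history has accumulated around the pre-synaptic spike; the sketch simply assumes the history is already complete when the function is called.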
Abstract-Large-scale neural hardware systems are trending increasingly towards the "neuromimetic" architecture: a general-purpose platform that specialises the hardware for neural networks but allows flexibility in model choice. Since the model is not hard-wired into the chip, exploration of different neural and synaptic models is not merely possible but provides a rich field for research: the possibility of using the hardware to establish useful abstractions of biological neural dynamics that could lead to a functional model of neural computation. Two areas of neural modelling stand out as central: 1) What level of detail in the neurodynamic model is necessary to achieve biologically realistic behaviour? 2) What is the role and effect of different types of synapses in the computation? Using a universal event-driven neural chip, SpiNNaker, we develop a simple model, the Leaky-Integrate-and-Fire (LIF) neuron, as a tool for exploring the second of these questions, complementary to the existing Izhikevich model, which allows exploration of the first. The LIF model permits the development of multiple synaptic models, including fast AMPA/GABA-A synapses with or without STDP learning, and slow NMDA synapses, spanning a range of dynamic time constants. Its simple dynamics make it possible to expand the complexity of the synaptic response, while the general-purpose design of SpiNNaker makes it possible, if necessary, to increase neurodynamic accuracy with Izhikevich (or even Hodgkin-Huxley) neurons at some cost in model size. Furthermore, the LIF model is a universally accepted "standard" neural model that provides a good basis for comparisons with software simulations and introduces minimal risk of obscuring important synaptic effects through unusual neurodynamics.
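For reference, the LIF dynamics at the core of this model are simple enough to state in a few lines. The following is a generic textbook Euler-step sketch, not the fixed-point SpiNNaker implementation; the parameter values are conventional illustrative choices, not those used on the chip:

```python
def lif_step(v, i_syn, dt=1.0, tau_m=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-65.0, r_m=1.0):
    """One forward-Euler step of a leaky integrate-and-fire neuron.

    Membrane equation: tau_m * dv/dt = (v_rest - v) + r_m * i_syn.
    Returns the updated membrane potential and whether the neuron fired.
    Times in ms, potentials in mV; parameter values are illustrative.
    """
    v += (dt / tau_m) * ((v_rest - v) + r_m * i_syn)
    fired = v >= v_thresh
    if fired:
        v = v_reset  # reset after a spike
    return v, fired
```

The synaptic models mentioned above (AMPA/GABA-A, NMDA) differ only in how the input current `i_syn` is shaped over time, which is why the simple LIF soma is a convenient substrate for comparing them.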
The simple models run thus far demonstrate the viability of both the LIF model and of various possible synaptic models on SpiNNaker, and illustrate how it can be used as a platform for model exploration. Such an architecture provides a scalable system for high-performance, large-scale neural modelling with complete freedom in model choice.

I. UNIVERSAL NEURAL HARDWARE: THE NEED FOR MODEL LIBRARIES

WHAT is the purpose of dedicated neural network hardware? In the past, the answer was easy: model acceleration. However, rapid progress in standard digital hardware often outpaced the development cycle for first-generation neural chips, so that by the time they were available, software simulators could outperform them anyway. Now, however, the rate of progress in standard digital hardware is flattening out, and the configurable-model concept that the FPGA introduced has given rise to new neural devices that offer various levels of model configurability without the scalability problems of the FPGA: "neuromimetic" chips [5]. These chips alter the nature of the research question: it is no longer simply a matter of how to scale a neural model to large sizes, but of what model to scale. The question of which neural network models are the "right" ones depends rath...