We have developed a spiking neural network simulator that is both easy to use and computationally efficient for building large-scale computational neuroscience models. The simulator implements current- or conductance-based Izhikevich neuron networks with spike-timing-dependent plasticity and short-term plasticity, and uses a standard network construction interface. It allows execution on either GPUs or CPUs. The simulator, written in C/C++, allows both fine-grained and coarse-grained specification of a host of parameters. We demonstrate its ease of use and computational efficiency by implementing a large-scale model of cortical areas V1, V4, and MT. The complete model, with 138,240 neurons and approximately 30 million synapses, runs in real time on an off-the-shelf GPU. The simulator source code, as well as the source code for the cortical model examples, is publicly available.
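Several of the abstracts here refer to Izhikevich neuron networks. For orientation, the following is a minimal sketch of the published Izhikevich model update (Izhikevich 2003: v' = 0.04v² + 5v + 140 − u + I, u' = a(bv − u), with reset on spike), not the simulators' actual code; the struct and method names are illustrative.

```cpp
// Minimal sketch of the Izhikevich neuron update (forward Euler, 1 ms step).
// Parameters a, b, c, d follow the standard published model; this is an
// illustration, not any simulator's internal implementation.
struct IzhikevichNeuron {
    double v;          // membrane potential (mV)
    double u;          // recovery variable
    double a, b, c, d; // model parameters

    IzhikevichNeuron(double a_, double b_, double c_, double d_)
        : v(-65.0), a(a_), b(b_), c(c_), d(d_) { u = b * v; }

    // Advance one step of dt ms with input current I; returns true on a spike.
    bool step(double I, double dt = 1.0) {
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I);
        u += dt * (a * (b * v - u));
        if (v >= 30.0) {   // spike cutoff
            v = c;         // reset potential
            u += d;        // recovery bump
            return true;
        }
        return false;
    }
};
```

With the classic regular-spiking parameters (a = 0.02, b = 0.2, c = −65, d = 8) and a constant input of I = 10, this neuron fires tonically.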
As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple-cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65× speedup of the GPU implementation over the CPU implementation (0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU). Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
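The evolutionary tuning loop described above can be sketched generically: a population of parameter vectors is mutated, each candidate is scored by a fitness function, and the population is truncated to the best individuals each generation. In the real framework the fitness evaluation runs the SNN on the GPU; here a stand-in quadratic objective takes its place, and all names are illustrative rather than the framework's actual API.

```cpp
#include <algorithm>
#include <random>
#include <vector>

using Genome = std::vector<double>;

// Stand-in objective: negated squared distance to a target vector of 0.5s.
// In the real framework this would score the SNN's tuning-curve responses.
double fitness(const Genome& g) {
    double err = 0.0;
    for (double x : g) err += (x - 0.5) * (x - 0.5);
    return -err;  // higher is better
}

// Simple (mu + lambda) evolutionary loop with Gaussian mutation and
// truncation selection; returns the best parameter vector found.
Genome evolve(int dims, int popSize, int generations, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> init(0.0, 1.0);
    std::normal_distribution<double> noise(0.0, 0.05);

    std::vector<Genome> pop(popSize, Genome(dims));
    for (auto& g : pop)
        for (auto& x : g) x = init(rng);

    for (int gen = 0; gen < generations; ++gen) {
        // Each parent produces one mutated offspring. Fitness evaluations are
        // the expensive part, which is why the paper farms them out to a GPU.
        std::vector<Genome> offspring = pop;
        for (auto& g : offspring)
            for (auto& x : g) x += noise(rng);
        pop.insert(pop.end(), offspring.begin(), offspring.end());

        std::sort(pop.begin(), pop.end(), [](const Genome& a, const Genome& b) {
            return fitness(a) > fitness(b);
        });
        pop.resize(popSize);  // truncation selection keeps the best half
    }
    return pop.front();
}
```

Swapping the stand-in `fitness` for a routine that simulates the network and scores its responses turns this skeleton into the kind of tuner the abstract describes.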
Neural network simulators that take into account the spiking behavior of neurons are useful for studying brain mechanisms and for engineering applications. Spiking Neural Network (SNN) simulators have traditionally been run on large-scale clusters, supercomputers, or dedicated hardware architectures. Alternatively, Graphics Processing Units (GPUs) can provide a low-cost, programmable, and high-performance computing platform for simulation of SNNs. In this paper we demonstrate an efficient, Izhikevich-neuron-based large-scale SNN simulator that runs on a single GPU. The GPU-SNN model (running on an NVIDIA GTX-280 with 1 GB of memory) is up to 26 times faster than a CPU version for the simulation of 100K neurons with 50 million synaptic connections firing at an average rate of 7 Hz. For simulation of 100K neurons with 10 million synaptic connections, the GPU-SNN model is only 1.5 times slower than real time. Further, we present a collection of new techniques related to parallelism extraction, mapping of irregular communication, and compact network representation for effective simulation of SNNs on GPUs. The fidelity of the simulation results was validated against CPU simulations using firing rate, synaptic weight distribution, and inter-spike interval analysis. We intend to make our simulator available to the modeling community so that researchers will have easy access to large-scale SNN simulations.
Biological neural systems are well known for their robust and power-efficient operation in highly noisy environments. Biological circuits are made up of low-precision, unreliable, and massively parallel neural elements with highly reconfigurable and plastic connections. Two of the most interesting properties of neural systems are their self-organizing capabilities and their template architecture. Recent research in spiking neural networks has demonstrated interesting principles about learning and neural computation. Understanding and applying these principles to practical problems is only possible if large-scale spiking neural simulators can be constructed. Recent advances in low-cost multiprocessor architectures make it possible to build large-scale spiking network simulators. In this paper we review modeling abstractions for neural circuits and frameworks for modeling, simulating, and analyzing spiking neural networks. Keywords: spiking neural networks, GPU, vision, computational neuroscience, parallel processing, synapse. 978-1-4244-6471-5/10/$26.00 © 2010 IEEE
In spiking neural networks, asynchronous spike events are processed in parallel by neurons. Emulations of such networks are traditionally computed by CPUs or realized using dedicated neuromorphic hardware. In many neuromorphic systems, the address-event representation (AER) is used for spike communication. In this paper we present the acceleration of AER-based spike processing using a graphics processing unit (GPU). In our experiment we interface a 128×128-pixel AER vision sensor to a spiking neural network implemented on a GPU for real-time convolution-based nonlinear feature extraction, with convolution kernel sizes ranging from 48×48 to 112×112 pixels. We show parallelism-performance trade-offs on GPUs for single-spike-per-thread, multiple-spikes-per-thread, and multiple-objects parallelism techniques. Our implementation can achieve a kernel speedup of up to 35× on a single NVIDIA GTX280 board when compared to a CPU-only implementation. (Computing Spike-based Convolutions on GPUs)
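The event-driven convolution referred to above works differently from frame-based convolution: each incoming AER spike at pixel (x, y) adds the kernel, centered at that pixel, into an accumulator map. The following serial sketch shows only the data flow; on a GPU, threads would process spikes or kernel rows in parallel, and the struct and member names here are illustrative.

```cpp
#include <vector>

// Sketch of event-driven (AER-style) convolution: each spike event stamps the
// kernel into an accumulator map instead of convolving a whole frame.
struct EventConv {
    int W, H, K;                // map width/height; odd kernel size K x K
    std::vector<float> kernel;  // K*K weights, row-major
    std::vector<float> acc;     // W*H accumulator ("membrane") map

    EventConv(int w, int h, int k, std::vector<float> ker)
        : W(w), H(h), K(k), kernel(std::move(ker)), acc(w * h, 0.0f) {}

    // Process one AER spike event arriving at pixel (x, y).
    void onSpike(int x, int y) {
        int r = K / 2;
        for (int ky = 0; ky < K; ++ky) {
            for (int kx = 0; kx < K; ++kx) {
                int px = x + kx - r;
                int py = y + ky - r;
                if (px < 0 || px >= W || py < 0 || py >= H) continue; // clip
                acc[py * W + px] += kernel[ky * K + kx];
            }
        }
    }
};
```

Because work is proportional to the number of spikes rather than the number of pixels, sparse event streams make this formulation attractive for the large kernel sizes the paper benchmarks.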
Brain Corporation is engaged in a multi-year project to build a spiking model of vision, paying special attention to the anatomy and physiology of the mammalian visual system. While it is relatively easy to hand-tune V1 to get simple and complex cells, it is not clear how to arrange connectivity in other cortical areas to get appropriate receptive fields, or even what the appropriate receptive fields should be. Instead of pre-wiring cortical connectivity according to a computational theory of how vision should work, we start with a generic "tabula rasa" spiking network having multiple cortical layers and neuronal types (single-compartment RS, FS, and LTS cells). The goal is to find the anatomical and physiological parameters so that the appropriate connectivity emerges through STDP and visual experience. Since we know exactly what kinds of receptive fields and visual responses are in V1, we build a smaller model of the retina-LGN-V1 pathway and tune the STDP parameters so that the expected responses emerge. Admittedly, there are many free parameters that could be tuned to get V1-like responses; our choice is motivated by in vitro recordings and other published data, whenever possible. Once we trust what we see in V1, we are ready to copy and paste the cortical model to implement V2, V3, V4, and IT areas, with the hope that useful connectivity, receptive fields, and visual responses emerge. Our large-scale simulations of the spiking model of the visual system show spontaneous emergence of simple and complex cells, orientation domains, end-stopping receptive fields, extra-classical receptive fields with tuned surround suppression, color opponency that depends on the eccentricity of the
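Several abstracts above rely on STDP without stating its form. The standard pair-based rule with exponential windows is the usual starting point: the weight change depends on the sign and magnitude of the post-minus-pre spike time difference. The parameter values below are illustrative placeholders of the kind the tuning work searches over, not values from any of these papers.

```cpp
#include <cmath>

// Standard pair-based STDP window: potentiation when the presynaptic spike
// precedes the postsynaptic spike, depression otherwise. A_plus, A_minus,
// tau_plus, and tau_minus are free parameters; the defaults are illustrative.
double stdpWeightChange(double tPost, double tPre,
                        double A_plus = 0.01, double A_minus = 0.012,
                        double tau_plus = 20.0, double tau_minus = 20.0) {
    double dt = tPost - tPre;  // ms; positive means pre fired before post
    if (dt >= 0.0)
        return A_plus * std::exp(-dt / tau_plus);    // potentiation
    return -A_minus * std::exp(dt / tau_minus);      // depression
}
```

Tuning frameworks of the kind described above treat the amplitudes and time constants of this window (per connection type) as part of the searched parameter space.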