Neuromorphic hardware platforms implement biological neurons and synapses to execute spiking neural networks (SNNs) in an energy-efficient manner. We present SpiNeMap, a design methodology for mapping SNNs to crossbar-based neuromorphic hardware that minimizes spike latency and energy consumption. SpiNeMap operates in two steps: SpiNeCluster and SpiNePlacer. SpiNeCluster is a heuristic-based clustering technique that partitions an SNN into clusters of synapses, where intra-cluster local synapses are mapped within crossbars of the hardware and inter-cluster global synapses are mapped to the shared interconnect. SpiNeCluster minimizes the number of spikes on global synapses, which reduces spike congestion on the shared interconnect and improves application performance. SpiNePlacer then finds the best placement of local and global synapses on the hardware using a meta-heuristic approach that minimizes energy consumption and spike latency. We evaluate SpiNeMap using synthetic and realistic SNNs on the DynapSE neuromorphic hardware, and show that SpiNeMap reduces average energy consumption by 45% and average spike latency by 21% compared to state-of-the-art techniques.
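The clustering idea can be sketched as a greedy, spike-aware partitioner. This is an illustrative simplification, not the SpiNeCluster heuristic itself: the fixed `capacity` per cluster and the heaviest-neuron-first greedy order are assumptions made for the sketch.

```python
# Greedy spike-aware clustering sketch: neurons are assigned to
# fixed-capacity clusters so that synapses carrying many spikes tend to
# stay local (inside one cluster) rather than global (on the interconnect).
from collections import defaultdict

def cluster_snn(synapses, num_neurons, capacity):
    """synapses: dict (pre, post) -> spike count on that synapse.
    Returns a list mapping neuron id -> cluster id."""
    # Total spike traffic between each pair of neurons (undirected view).
    traffic = defaultdict(int)
    for (pre, post), spikes in synapses.items():
        traffic[frozenset((pre, post))] += spikes

    # Visit neurons heaviest-traffic-first, a common greedy ordering.
    degree = defaultdict(int)
    for pair, spikes in traffic.items():
        for n in pair:
            degree[n] += spikes

    assignment = [None] * num_neurons
    sizes = defaultdict(int)
    for n in sorted(range(num_neurons), key=lambda m: -degree[m]):
        # Gain of putting n into cluster c = spikes that become local.
        gain = defaultdict(int)
        for pair, spikes in traffic.items():
            if n in pair:
                other = next((m for m in pair if m != n), n)
                if other != n and assignment[other] is not None:
                    gain[assignment[other]] += spikes
        best = max((c for c in gain if sizes[c] < capacity),
                   key=lambda c: gain[c], default=None)
        if best is None:          # no attractive cluster has room: open one
            best = len(sizes)
        assignment[n] = best
        sizes[best] += 1
    return assignment
```

With two heavily communicating pairs and capacity 2, each pair lands in its own cluster, so no high-traffic synapse is mapped to the shared interconnect.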
We present PyCARL, a PyNN-based common Python programming interface for hardware-software co-simulation of spiking neural networks (SNNs). Through PyCARL, we make two key contributions. First, we provide a PyNN interface to CARLsim, a computationally efficient, GPU-accelerated, and biophysically detailed SNN simulator. PyCARL facilitates joint development of machine learning models and code sharing between CARLsim and PyNN users, promoting an integrated and larger neuromorphic community. Second, we integrate cycle-accurate models of state-of-the-art neuromorphic hardware such as TrueNorth, Loihi, and DynapSE into PyCARL, to accurately model hardware latencies that delay spikes between communicating neurons and degrade performance. PyCARL allows users to analyze and optimize the performance difference between software-only simulation and hardware-software co-simulation of their machine learning models. System designers can also use PyCARL to perform design-space exploration early in the product development stage, facilitating faster time-to-deployment of neuromorphic products. We evaluate the memory usage and simulation time of PyCARL using functionality tests, synthetic SNNs, and realistic applications. Our results demonstrate that, for large SNNs, PyCARL does not introduce any significant overhead compared to CARLsim. We also use PyCARL to analyze these SNNs for a state-of-the-art neuromorphic hardware and demonstrate a significant performance deviation from software-only simulation. PyCARL allows users to evaluate and minimize such differences early during model development.
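The effect that the integrated hardware models expose can be illustrated with a toy latency-injection model. This is not PyCARL's cycle-accurate machinery: the per-hop latency and hop counts below are hypothetical parameters used only to show how hardware delays distort inter-spike intervals (ISIs) relative to a software-only simulation.

```python
# Toy model: each spike is delayed by a hop-dependent interconnect latency;
# comparing ISIs of the delayed train against the software-only train
# quantifies the performance deviation that co-simulation reveals.

def delayed_spike_times(spike_times, hops_per_spike, latency_per_hop):
    """Shift each spike by its routing latency on the interconnect."""
    return [t + h * latency_per_hop
            for t, h in zip(spike_times, hops_per_spike)]

def isi(spike_times):
    """Inter-spike intervals of a chronologically sorted spike train."""
    return [b - a for a, b in zip(spike_times, spike_times[1:])]

def max_isi_distortion(sw_times, hw_times):
    """Largest absolute ISI deviation introduced by the hardware delays."""
    return max((abs(a - b) for a, b in zip(isi(sw_times), isi(hw_times))),
               default=0.0)
```

For a software-only train [0, 10, 20] routed over 1, 3, and 1 hops at 2.0 time units per hop, the delayed train is [2.0, 16.0, 22.0], and the ISIs of 10 each become 14 and 6, a distortion of 4.0 that a software-only simulation would never show.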
Heartbeat classification using electrocardiogram (ECG) data is an essential feature of modern-day wearable devices. State-of-the-art machine learning-based heartbeat classifiers are designed using convolutional neural networks (CNNs). Despite their high classification accuracy, CNNs require significant computational resources and power, which makes mapping CNNs onto resource- and power-constrained wearable devices challenging. In this paper, we propose heartbeat classification using spiking neural networks (SNNs), an alternative approach based on biologically inspired, event-driven neural networks. SNNs compute and transfer information using discrete spikes, which require fewer operations and less complex hardware resources, making them more energy-efficient than CNNs. However, due to the complexity of error backpropagation involving spikes, supervised learning of deep SNNs remains challenging. We therefore propose an alternative approach to SNN-based heartbeat classification: we start with an optimized CNN implementation of the heartbeat classification task and then convert the CNN operations, such as multiply-accumulate, pooling, and softmax, into spiking equivalents with minimal loss of accuracy. We evaluate the SNN-based heartbeat classification using the publicly available ECG database of the Massachusetts Institute of Technology and Beth Israel Hospital (MIT-BIH), and demonstrate a minimal loss in accuracy compared to the 85.92% accuracy of a CNN-based heartbeat classifier. We show that, for every operation, the activation of SNN neurons in each layer is sparse compared to CNN neurons in the same layer, and that this sparsity increases with the number of layers in the network. In addition, we detail the power-accuracy trade-off of the SNN and show an 87.76% and 96.82% reduction in SNN neuron and synapse activity, respectively, for an accuracy loss between 0.6% and 1.0%, compared to a CNN-only implementation.
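The core of such CNN-to-SNN conversion can be sketched with a single rate-coded neuron. This is a minimal illustration of the general idea, not the paper's conversion pipeline: the threshold, window length, and reset-by-subtraction scheme are common choices assumed here, showing how a spike rate approximates a ReLU activation.

```python
# Rate-coding sketch: an integrate-and-fire neuron driven by a constant
# input current emits spikes whose rate over a long window approximates
# max(0, x), the ReLU activation of the original CNN neuron.

def if_neuron_rate(input_current, timesteps=1000, threshold=1.0):
    """Spike rate of an integrate-and-fire neuron over a fixed window."""
    v, spikes = 0.0, 0
    for _ in range(timesteps):
        v += input_current            # integrate the constant input
        if v >= threshold:
            v -= threshold            # soft reset keeps residual charge,
            spikes += 1               # reducing conversion error
    return spikes / timesteps
```

For inputs in [0, 1] the rate closely tracks the ReLU output (e.g., an input of 0.3 yields a rate near 0.3, and any negative input yields a rate of exactly 0), while inputs above 1 saturate at one spike per timestep, which is why converted activations are typically normalized.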
With growing model complexity, mapping Spiking Neural Network (SNN)-based applications to tile-based neuromorphic hardware is becoming increasingly challenging. This is because the synaptic storage resources on a tile, viz. a crossbar, can accommodate only a fixed number of pre-synaptic connections per post-synaptic neuron. For complex SNN models that have many pre-synaptic connections per neuron, some connections may need to be pruned after training to fit onto the tile resources, leading to a loss in model quality, e.g., accuracy. In this work, we propose a novel unrolling technique that decomposes a neuron function with many pre-synaptic connections into a sequence of homogeneous neural units, where each neural unit is a function computation node with two pre-synaptic connections. This spatial decomposition significantly improves crossbar utilization and retains all pre-synaptic connections, avoiding the loss of model quality that connection pruning would cause. We integrate the proposed technique within an existing SNN mapping framework and evaluate it using machine learning applications on the state-of-the-art DYNAP-SE neuromorphic hardware. Our results demonstrate an average 60% lower crossbar requirement, 9x higher synapse utilization, 62% less wasted energy on the hardware, and between 0.8% and 4.6% higher model quality.
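The unrolling idea can be sketched as follows. This is a simplified illustration of the decomposition, not the paper's mapping framework: a neuron summing k weighted inputs is rewritten as a chain of homogeneous units, each taking one pre-synaptic input plus the previous unit's partial sum, so no unit ever needs more than two pre-synaptic connections.

```python
# Spatial unrolling sketch: decompose one k-input weighted sum into a
# chain of 2-input units that each fit a crossbar supporting only two
# pre-synaptic connections per post-synaptic neuron.

def unroll(weights):
    """Return a list of 2-input units; unit i adds weighted input i to the
    partial sum from unit i-1 (None marks the absent predecessor of unit 0)."""
    units = []
    for i, w in enumerate(weights):
        prev = ("u", i - 1) if i > 0 else None
        units.append({"in": [("x", i), prev], "w": w})
    return units

def evaluate(units, inputs):
    """Run the chain: each unit consumes one input and the running sum."""
    acc = 0.0
    for unit, x in zip(units, inputs):
        acc += unit["w"] * x      # at most 2 pre-synaptic inputs per unit
    return acc
```

Because every original connection survives as the `("x", i)` input of some unit, the decomposed chain computes the same weighted sum as the original neuron, which is why no pruning (and hence no quality loss) is needed.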
Neuromorphic architectures with non-volatile memory (NVM) implement biological neurons and synapses to execute spiking neural networks (SNNs). To access synaptic weights, an NVM cell's peripheral circuit drives current through the cell using a high bias voltage generated by an on-chip charge pump. High-voltage operation induces aging of the CMOS devices in the charge pump, leading to defects generated by negative bias temperature instability (NBTI) and hot carrier injection (HCI). Charge-pump aging therefore poses a significant threat to the operating lifetime of neuromorphic architectures. Discharging a stressed charge pump periodically can lower its aging rate, but makes the architecture unavailable to process spikes while its charge pumps are being discharged. This introduces delay in spike propagation, which distorts inter-spike intervals (ISIs), leading to information loss and challenging the integrity of SNNs. This performance-lifetime trade-off depends on the SNN workload being executed. In this paper, we propose a novel framework to exploit workload-specific performance and lifetime trade-offs in neuromorphic computing. Our framework first extracts the precise times at which spikes are generated on all synapses of an SNN workload. This timing information is then used within a new analytical formulation to estimate the aging of charge pumps based on the SNN's mapping to the hardware and the power delivery architecture of the charge pumps. We use the developed framework to optimize the mapping of neurons and synapses at design time, and to schedule the discharge of stressed charge pumps at run time, maximizing their lifetime without significantly hurting the workload's performance.
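The run-time scheduling idea can be sketched with a simple policy. This is an assumed simplification, not the paper's analytical aging model or scheduler: given the extracted spike times on a synapse, place a discharge of a hypothetical fixed duration inside the longest idle gap, so the pump's high-voltage stress is relieved without delaying any spike or distorting ISIs.

```python
# Discharge-scheduling sketch: find the longest inter-spike gap that can
# absorb a charge-pump discharge of the given duration, so the discharge
# never overlaps spike activity.

def best_discharge_window(spike_times, discharge_duration):
    """Return (start, end) of the discharge window centred in the longest
    inter-spike gap that fits it, or None if every gap is too short."""
    gaps = [(a, b) for a, b in zip(spike_times, spike_times[1:])
            if b - a >= discharge_duration]
    if not gaps:
        return None                       # no idle slot: delaying spikes
    a, b = max(gaps, key=lambda g: g[1] - g[0])
    mid = (a + b) / 2                     # centre to keep slack on both sides
    return (mid - discharge_duration / 2, mid + discharge_duration / 2)
```

When no gap is long enough, the policy returns None, which is exactly the case where discharging would delay spikes and the workload-specific trade-off the paper formulates comes into play.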