2021
DOI: 10.3390/s21186006
A Cost-Efficient High-Speed VLSI Architecture for Spiking Convolutional Neural Network Inference Using Time-Step Binary Spike Maps

Abstract: Neuromorphic hardware systems have been gaining ever-increasing focus in many embedded applications as they use a brain-inspired, energy-efficient spiking neural network (SNN) model that closely mimics the human cortex mechanism by communicating and processing sensory information via spatiotemporally sparse spikes. In this paper, we fully leverage the characteristics of the spiking convolutional neural network (SCNN), and propose a scalable, cost-efficient, and high-speed VLSI architecture to accelerate deep SCNN in…

Cited by 10 publications (3 citation statements). References 42 publications.
“…This paper adopts the original single-bit format to represent the binary spikes. At any discrete timestep t in the digitalized SCNN, the output spikes of all the neurons in one channel of the convolutional layer can be considered a timestep snapshot in the form of a binary map [36]. In this case, the input-current integration phase of the SNN is computed almost the same way as in a traditional ANN, except for the additional time dimension and the changed operation.…”
Section: B. Dataflow and Parallelism Scheme for SCNN
confidence: 99%
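The statement above can be sketched in code: because each spike is 0 or 1, the per-timestep convolution needs no multiplies — the kernel weight is simply accumulated wherever the binary map spiked. The following is a minimal illustration of that idea, not the paper's implementation; the function name and shapes are assumptions for the sketch.

```python
import numpy as np

def spike_map_conv(spike_map, weights):
    """Input-current integration for one timestep of an SCNN conv layer.

    spike_map: (H, W) binary array -- the timestep snapshot of one channel.
    weights:   (kH, kW) kernel for one input/output channel pair.
    Since spikes are 0/1, each MAC reduces to a conditional accumulate:
    weights are added only at the positions where the input spiked.
    """
    H, W = spike_map.shape
    kH, kW = weights.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = spike_map[i:i + kH, j:j + kW]
            # boolean mask selects weights at spike positions (no multiplies)
            out[i, j] = weights[patch.astype(bool)].sum()
    return out
```

Running this over all timesteps and channels, and feeding the accumulated current into a neuron model, recovers the extra time dimension mentioned in the statement.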
“…Zhang et al. [13] propose a scalable, cost-efficient, and high-speed VLSI architecture to accelerate deep spiking convolutional neural networks (SCNNs). Neuromorphic hardware typically consists of multiple cores, and each core can accommodate only a limited number of neurons.…”
Section: Introduction
confidence: 99%
“…Research is booming in the use of LIF spiking networks for online learning 27, braille letter reading 28, and different neuromorphic synaptic devices 29 for the detection and classification of biological problems [30–36]. Significant research focuses on achieving human-level control 37, optimizing back-propagation algorithms for spiking networks [38–40], and penetrating much deeper into ARCSes core [41–44] with a smaller number of time steps 41, using an event-driven paradigm 36,40,45,46, applying batch normalization 47, scatter-and-gather optimizations 48, supervised plasticity 49, time-step binary maps 50, and transfer learning algorithms 51. In concert with this broad range of software applications, a large body of research is directed at developing and using these LIF SNNs in embedded applications with the help of neuromorphic hardware [52–57], the generic name given to hardware that is nominally based on, or inspired by, the structure and function of the human brain.…”
confidence: 99%