2023
DOI: 10.3390/mi14071353
Implementation of Field-Programmable Gate Array Platform for Object Classification Tasks Using Spike-Based Backpropagated Deep Convolutional Spiking Neural Networks

Abstract: This paper investigates the performance of deep convolutional spiking neural networks (DCSNNs) trained using spike-based backpropagation techniques. Specifically, the study examined temporal spike sequence learning via backpropagation (TSSL-BP) and surrogate gradient descent via backpropagation (SGD-BP) as effective techniques for training DCSNNs on the field-programmable gate array (FPGA) platform for object classification tasks. The primary objective of this experimental study was twofold: (i) to determine t…

Cited by 4 publications (1 citation statement) · References 61 publications
“…In [12], a simplified leaky integrate-and-fire neuron model is used to develop an efficient SNN on a Xilinx Virtex-6 FPGA. In a recent study [13], deep convolutional spiking neural networks (DCSNNs) were successfully implemented on low-power FPGA devices, where two backpropagation techniques were compared for object classification tasks. Additionally, another recent work [14] focuses on developing a customizable hardware accelerator for neural network inference models, specifically building a convolutional neural network on an FPGA.…”
Section: Related Work
confidence: 99%
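For readers unfamiliar with the leaky integrate-and-fire (LIF) neuron referenced in [12], a minimal discrete-time sketch is given below. This is an illustrative Euler-update model with arbitrary example parameters; it is not the simplified model or hardware implementation used in the cited papers.

```python
# Illustrative leaky integrate-and-fire (LIF) neuron, discrete-time Euler update.
# All parameter values are arbitrary examples, not those from the cited works.
def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0, resistance=5.0):
    """Return the time steps at which the neuron emits a spike."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane leaks toward the resting potential while the input drives it up.
        v += (-(v - v_rest) + resistance * i_in) * dt / tau
        if v >= v_threshold:   # Threshold crossing emits a spike...
            spikes.append(t)
            v = v_reset        # ...and resets the membrane potential.
    return spikes

# A constant suprathreshold drive produces a regular spike train.
print(lif_simulate([0.5] * 100))
```

With a constant input, the membrane potential charges toward a fixed point; whenever that fixed point exceeds the threshold, the neuron fires periodically, which is the spiking behavior these FPGA designs implement in hardware.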