2023
DOI: 10.1109/tcad.2022.3213211

SATA: Sparsity-Aware Training Accelerator for Spiking Neural Networks

Cited by 20 publications (15 citation statements)
References 33 publications
“…Finally, we introduce the hardware platform that we design for carrying out the experiments on energy efficiency. We extend the overall architecture and PE design from Yin et al ( 2022 ) to support the necessary computation and data movement for our SNNs in HAR tasks. Owing to the 1D convolution and temporal dynamics that are naturally embedded in the time series data, the complexity of the hardware design has been largely reduced.…”
Section: Methods
Mentioning (confidence: 99%)
“…The size of the PE array and global buffers are configurable according to different network structures. In this work, we set the number of PEs to 128, weight (W) buffer to 32 KB, and spike (S) buffer to 576 bytes, for matching with the dataflow used in Yin et al ( 2022 ). We briefly explain the computation and data movement flow below.…”
Section: Methods
Mentioning (confidence: 99%)
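The PE-array and buffer sizes quoted above can be captured as a small configuration record. The sketch below is only illustrative, assuming a Python-style description; the names (AcceleratorConfig, num_pes, and so on) are hypothetical and do not come from SATA or the citing work.

# Minimal sketch (assumed names) of the accelerator parameters quoted above:
# 128 PEs, a 32 KB weight (W) buffer, and a 576-byte spike (S) buffer.
from dataclasses import dataclass

@dataclass(frozen=True)
class AcceleratorConfig:
    num_pes: int               # processing elements in the PE array
    weight_buffer_bytes: int   # global weight (W) buffer capacity
    spike_buffer_bytes: int    # global spike (S) buffer capacity

# Values reported by the citing work to match the dataflow of Yin et al. (2022).
sata_like_config = AcceleratorConfig(
    num_pes=128,
    weight_buffer_bytes=32 * 1024,  # 32 KB
    spike_buffer_bytes=576,         # 576 B; binary spikes need little storage
)

print(sata_like_config)

Since both the PE count and the buffer sizes are described as configurable, keeping them in a single record like this makes it straightforward to resize the design for other network structures.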