2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9534111
Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural Networks

Cited by 17 publications (5 citation statements)
References 22 publications
“…SNNs, unlike their non-spiking counterparts, consist of a temporal dimension. Along with considering temporal information, a spatial and temporal pruning of SNNs is proposed in Chowdhury et al (2021). Generally speaking, pruning will cause accuracy degradation to some extent.…”
Section: Parameter Pruning (mentioning)
confidence: 99%
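
The spatio-temporal pruning idea referenced above can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not Chowdhury et al.'s exact procedure: spatial pruning is approximated by simple magnitude-based weight masking, and temporal pruning by running the network for fewer simulation timesteps; all names and values are hypothetical.

```python
import torch

def magnitude_prune_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Boolean mask keeping the largest-magnitude (1 - sparsity) fraction of weights."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight, dtype=torch.bool)
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight.abs() > threshold

# Spatial pruning: zero out 90% of a hypothetical layer's weights.
w = torch.randn(128, 64)
w_pruned = w * magnitude_prune_mask(w, sparsity=0.9)

# Temporal pruning: run the SNN for fewer simulation timesteps.
# Illustrative values only, not the paper's schedule.
T_full, T_reduced = 100, 20
```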
“…Schaefer and Joshi (2020) propose integer fixed-point representations for neural dynamics, weights, loss, and gradients. The recent work (Chowdhury et al., 2021a) performs quantization through the temporal dimension for low-latency SNNs. Lui and Neftci propose a quantization technique based on the Hessian of weights (Lui and Neftci, 2021).…”
Section: Related Work (mentioning)
confidence: 99%
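
As an illustration of the fixed-point representations mentioned in this excerpt, here is a minimal sketch assuming a signed fixed-point grid with a configurable number of fractional bits; the function name and bit widths are hypothetical and not taken from Schaefer and Joshi (2020) or Chowdhury et al. (2021a).

```python
import torch

def quantize_fixed_point(x: torch.Tensor, total_bits: int = 8, frac_bits: int = 6) -> torch.Tensor:
    """Round to a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))      # most negative integer code
    qmax = 2 ** (total_bits - 1) - 1     # most positive integer code
    return torch.clamp(torch.round(x * scale), qmin, qmax) / scale

w = torch.randn(64, 64)
w_q = quantize_fixed_point(w)
# Rounding error for in-range values is at most 2**-(frac_bits + 1);
# values beyond the representable range are clamped and can err more.
print((w - w_q).abs().max().item())
```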
“…Zhang et al. [22] proposed an asynchronous spiking convolution kernel with long-range structural projection, which enhances SNN encoding, speeds up learning, and reduces energy consumption. Chowdhury et al. [23] analyzed the saliency of each layer with principal component analysis and proposed a spatio-temporal pruning method. These pruning approaches have all demonstrated their effectiveness in terms of compression ratio and performance, and all belong to the family of model compression methods, i.e., they shrink a network by compressing an existing structure.…”
Section: Spiking Neural Network Model Compression Methods (unclassified)
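
The PCA-based layer saliency mentioned in this excerpt can be sketched as follows. This is a minimal illustration assuming saliency is measured as the number of principal components needed to explain a fixed fraction of activation variance; the threshold and function name are chosen for illustration rather than taken from [23].

```python
import numpy as np

def layer_saliency_pca(activations: np.ndarray, var_threshold: float = 0.99) -> int:
    """Number of principal components explaining `var_threshold` of the variance
    in a layer's activations (shape: [num_samples, num_features])."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)   # PCA spectrum via SVD
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(ratio, var_threshold) + 1)

# A layer whose activations are captured by few components is more redundant,
# making it a candidate for more aggressive pruning.
acts = np.random.randn(1000, 256)
print(layer_saliency_pca(acts))
```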