2023
DOI: 10.1609/aaai.v37i7.26002

Exploring Temporal Information Dynamics in Spiking Neural Networks

Abstract: Most existing Spiking Neural Network (SNN) works state that SNNs may utilize temporal information dynamics of spikes. However, an explicit analysis of temporal information dynamics is still missing. In this paper, we ask several important questions for providing a fundamental understanding of SNNs: What are temporal information dynamics inside SNNs? How can we measure the temporal information dynamics? How do the temporal information dynamics affect the overall learning performance? To answer these questions, …
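
The second question above (how to measure temporal information dynamics) can be made concrete with an information-theoretic proxy. The sketch below is illustrative only and is not the paper's exact procedure: it runs a toy surrogate-gradient LIF layer for a growing number of timesteps and uses the trace of the empirical Fisher information of the weights (the summed squared gradients of the loss) as a rough measure of how informative the first t timesteps are. All names (SurrogateSpike, LIFLayer, fisher_trace_per_timestep) and hyperparameters are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()


class LIFLayer(nn.Module):
    """Linear projection followed by a leaky integrate-and-fire neuron."""

    def __init__(self, in_dim, out_dim, tau=2.0):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.tau = tau

    def forward(self, x_t, v):
        v = v / self.tau + self.fc(x_t)      # leaky integration of input current
        s = SurrogateSpike.apply(v - 1.0)    # fire when membrane exceeds threshold 1.0
        return s, v * (1.0 - s)              # hard reset for neurons that fired


def fisher_trace_per_timestep(layer, readout, x_seq, y):
    """Empirical Fisher trace of the weights when the SNN is run up to timestep t."""
    traces = []
    for t in range(x_seq.shape[0]):
        layer.zero_grad()
        readout.zero_grad()
        v = torch.zeros(x_seq.shape[1], layer.fc.out_features)
        out = 0.0
        for k in range(t + 1):               # unroll the SNN for the first t+1 timesteps
            s, v = layer(x_seq[k], v)
            out = out + readout(s)
        loss = F.cross_entropy(out / (t + 1), y)
        loss.backward()
        params = list(layer.parameters()) + list(readout.parameters())
        traces.append(sum((p.grad ** 2).sum().item() for p in params))
    return traces


# Toy usage: 4 timesteps, batch of 8, 20 input features, 5 classes.
layer, readout = LIFLayer(20, 32), nn.Linear(32, 5)
x_seq = torch.rand(4, 8, 20)
y = torch.randint(0, 5, (8,))
print(fisher_trace_per_timestep(layer, readout, x_seq, y))
```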

Cited by 9 publications (2 citation statements). References 45 publications (70 reference statements).
“…Other general approaches to improve the processing efficiency of SNNs: A group of work re-used the network compression techniques via pruning and quantization for SNNs [14], [15], [16]. These techniques reduce the model size by reducing the neurons count (e.g., eliminating neurons with low firing rates), reducing the synaptic connectivity, or reducing the bit size of the network's parameters.…”
Section: Related Work
Mentioning confidence: 99%
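
To make the two compression ideas in the excerpt above concrete, the following hedged sketch shows (1) pruning neurons whose average firing rate falls below a threshold and (2) symmetric uniform quantization of the remaining weights. The function names and thresholds are illustrative assumptions, not taken from the cited works [14], [15], [16].

```python
import torch


def prune_low_rate_neurons(weight, spike_record, rate_threshold=0.01):
    """Zero out the outgoing weights of neurons that rarely fire.

    weight:       (out_features, in_features) weight matrix of the next layer
    spike_record: (timesteps, batch, in_features) binary spike tensor for the
                  neurons feeding this layer
    """
    firing_rate = spike_record.float().mean(dim=(0, 1))     # per-neuron firing rate
    keep = firing_rate >= rate_threshold                     # neurons worth keeping
    return weight * keep.float().unsqueeze(0), keep


def quantize_weights(weight, num_bits=8):
    """Symmetric uniform quantization of a weight tensor to num_bits."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = weight.abs().max() / qmax
    return torch.round(weight / scale).clamp(-qmax, qmax) * scale


# Toy usage: 64 input neurons, 10 outputs, 4 timesteps, batch of 16.
w = torch.randn(10, 64)
spikes = (torch.rand(4, 16, 64) < 0.05).float()
w_pruned, kept = prune_low_rate_neurons(w, spikes)
w_small = quantize_weights(w_pruned, num_bits=4)
print(f"kept {kept.sum().item()}/64 neurons, max |w| = {w_small.abs().max().item():.3f}")
```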
“…Although this sample-wise timestep potentially limits parallel inference (batch size of 1 in (Li, Jones, and Furber 2023)), it demonstrates the potential of SNNs at very low latencies. Another work (Kim et al 2022) pointed out the phenomenon of temporal information concentration: during the training of SNNs, the effective information is gradually aggregated to the earlier timestep. The above work reveals the high temporal redundancy of existing SNNs, which raises a question: is it possible to reduce redundant timesteps in SNNs without sacrificing performance and parallelism?…”
Section: Introduction
Mentioning confidence: 99%
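
The "temporal information concentration" phenomenon described in the excerpt above suggests a simple diagnostic: if early timesteps already carry most of the useful information, their predictions should agree with the final accumulated prediction. The sketch below is a minimal, illustrative probe under that assumption; `per_step_logits` is assumed to be the per-timestep logits of an already-trained SNN, which is not shown here.

```python
import torch


@torch.no_grad()
def per_timestep_agreement(per_step_logits):
    """Fraction of samples where each single timestep already predicts the
    same class as the full accumulated output."""
    final_pred = per_step_logits.mean(dim=0).argmax(dim=-1)   # (batch,)
    agreement = []
    for t in range(per_step_logits.shape[0]):
        step_pred = per_step_logits[t].argmax(dim=-1)
        agreement.append((step_pred == final_pred).float().mean().item())
    return agreement


# Toy usage with random logits standing in for a real SNN's output.
logits = torch.randn(6, 32, 10)      # 6 timesteps, batch 32, 10 classes
print(per_timestep_agreement(logits))
```

If agreement saturates after the first few timesteps, the remaining timesteps are largely redundant, which is exactly the redundancy the citing work proposes to eliminate.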