2022
DOI: 10.1007/978-3-031-20053-3_3
Neural Architecture Search for Spiking Neural Networks

Cited by 41 publications (19 citation statements)
References 61 publications
“…Note that these datasets only have one label for each input sample, so top-1 accuracy is the same as the F1 score. Similar to the SNN in image recognition tasks (Kim et al., 2022), the last layer of our SNN architecture is a fully connected layer. Therefore, we simply integrate all the membrane potentials in this layer for the softmax class prediction.…”
Section: Methods
confidence: 99%
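The readout this statement describes is simple to sketch. Below is a minimal PyTorch illustration, assuming a final fully connected layer whose membrane potential is accumulated over all timesteps (with no spiking or reset in that layer) and then passed through a softmax. The class name `PotentialReadout`, its parameters, and the tensor shapes are hypothetical names chosen for illustration, not code from the cited papers.

```python
import torch
import torch.nn as nn

class PotentialReadout(nn.Module):
    """Final FC layer whose membrane potential is integrated, not spiked."""

    def __init__(self, in_features: int, num_classes: int, num_steps: int):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)
        self.num_steps = num_steps

    def forward(self, spike_seq: torch.Tensor) -> torch.Tensor:
        # spike_seq: [num_steps, batch, in_features] binary spike trains
        membrane = 0.0
        for t in range(self.num_steps):
            membrane = membrane + self.fc(spike_seq[t])  # integrate, no reset
        return torch.softmax(membrane, dim=-1)           # class probabilities

# Usage sketch with random spike trains (T=5 timesteps, batch of 8)
readout = PotentialReadout(in_features=512, num_classes=10, num_steps=5)
spikes = (torch.rand(5, 8, 512) > 0.8).float()
probs = readout(spikes)  # shape [8, 10]
```

Accumulating the potential rather than counting output spikes keeps the class scores real-valued, which is why the softmax can be applied to them directly.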
“…Furthermore, wide network architectures help reduce the accuracy loss when the number of timesteps is limited. This shows the importance of finding network architectures suitable for SNNs, which can be done using Neural Architecture Search [82]. Finally, the accuracy-latency trade-off in SNNs should be carefully considered.…”
Section: Discussion
confidence: 99%
“…We regard this accuracy degradation as the cost of adopting an off-the-shelf QAT ANN training toolkit without dedicated optimizations toward low-latency inference, as employed in Deng and Gu (2021), Bu et al. (2022), and Li et al. (2022b). The recently emerged direct SNN training methods can also reach relatively high accuracy while consuming far fewer time steps (< 10) (Guo et al., 2021, 2022a,b,c,d; Deng et al., 2022; Kim et al., 2022; Li et al., 2022a). However, evaluating direct SNN training methods is beyond the scope of this article.…”
Section: Methods
confidence: 99%
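For readers unfamiliar with the "direct training" approach referenced above, its key ingredient is a surrogate gradient that lets backpropagation pass through the non-differentiable spike, so the network can be trained end to end over a small number of timesteps. The sketch below is a generic illustration under common assumptions (unit threshold, soft reset, rectangular surrogate), not the specific method of any of the cited papers; the names `SurrogateSpike` and `lif_forward` are invented for this example.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane >= 1.0).float()      # fire when potential >= threshold

    @staticmethod
    def backward(ctx, grad_out):
        (membrane,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradient only near the threshold
        near_threshold = ((membrane - 1.0).abs() < 0.5).float()
        return grad_out * near_threshold

def lif_forward(x_seq, decay=0.5):
    """Run a leaky integrate-and-fire neuron over input [T, batch, features]."""
    mem, spikes = 0.0, []
    for x in x_seq:
        mem = decay * mem + x                 # leaky integration
        s = SurrogateSpike.apply(mem)
        mem = mem - s                         # soft reset by the threshold
        spikes.append(s)
    return torch.stack(spikes)

# Usage sketch: T = 5 timesteps, gradients flow through the surrogate
x = torch.randn(5, 8, 16, requires_grad=True)
out = lif_forward(x)
out.sum().backward()
```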