2022 · DOI: 10.1609/aaai.v36i2.20061

SpikeConverter: An Efficient Conversion Framework Zipping the Gap between Artificial Neural Networks and Spiking Neural Networks

Abstract: Spiking Neural Networks (SNNs) have recently attracted enormous research interest since their event-driven and brain-inspired structure enables low-power computation. In image recognition tasks, the best SNN results so far are achieved by ANN-SNN conversion methods, which replace the activation functions in artificial neural networks (ANNs) with integrate-and-fire neurons. Compared to their source ANNs, converted SNNs usually suffer from accuracy loss and require a considerable number of time steps to achieve co…
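
To ground the conversion idea from the abstract, here is a minimal sketch of how an integrate-and-fire (IF) neuron's firing rate can stand in for a ReLU activation; the toy model and the parameters T and v_th below are illustrative assumptions, not the SpikeConverter implementation:

```python
import numpy as np

def relu(x):
    """The ANN activation that conversion approximates with spike rates."""
    return np.maximum(x, 0.0)

def if_neuron_rate(z, T=64, v_th=1.0):
    """Firing rate of an IF neuron driven by a constant input current z.

    With reset-by-subtraction, the rate over T steps approximates
    relu(z) / v_th (toy model for illustration only).
    """
    v, spikes = 0.0, 0
    for _ in range(T):
        v += z               # integrate the input current
        if v >= v_th:        # threshold crossing: emit a spike
            v -= v_th        # reset by subtraction keeps the residual
            spikes += 1
    return spikes / T

z = 0.3
print(relu(z), if_neuron_rate(z))  # rate ~ 0.297, close to relu(0.3)
```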

Cited by 26 publications (12 citation statements) · References 27 publications

“…The negative spike mentioned by Kim et al. (2020) is only for modeling the negative part of the leaky-ReLU unit widely required in object detection, while our Ca-LIF neuron uses negative spikes to counterbalance the early emitted positive spikes, so that when the net input z_s in Equation (3b), aggregated over the entire time window T, is negative, the final signed spike count can be zero, which closely emulates the quantized ReLU function in classification tasks, as explained in Sections 3.1 and 3.2. Some previous ANN-to-SNN works do not adopt such a method and instead employ more complex threshold/weight balancing operations to compensate for the early emitted positive spikes (Diehl et al., 2015; Rueckauer et al., 2017; Han et al., 2020; Ho and Chang, 2021; Liu et al., 2022). In this regard, although judging the sign of the spikes introduces marginal additional computational overhead, it eliminates tedious post-conversion steps such as threshold/weight balancing.…”
Section: Methods
Mentioning confidence: 99%
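
To make the quoted mechanism concrete, here is a toy signed-spike neuron in the spirit of the Ca-LIF description above: negative spikes cancel early emitted positive ones, so a window whose summed input is negative yields a net spike count of zero, like a quantized ReLU. The model below is an illustrative assumption, not the authors' exact neuron:

```python
import numpy as np

def signed_spike_count(z_seq, v_th=1.0):
    """Net signed spike count over one time window (toy sketch).

    Emits +1/-1 spikes; a negative spike cancels an earlier positive one,
    so if the input summed over the window is negative the net count is 0,
    mimicking a quantized ReLU (illustrative, not the paper's model).
    """
    v, count = 0.0, 0
    for z in z_seq:
        v += z
        if v >= v_th:                    # positive spike, reset by subtraction
            v -= v_th
            count += 1
        elif v <= -v_th and count > 0:   # negative spike only to cancel
            v += v_th
            count -= 1
    return count                         # never drops below zero

# Early positive input followed by strong negative input: net count -> 0
seq = np.array([1.2, 1.2, -2.0, -2.0])  # sums to -1.6
print(signed_spike_count(seq))           # 0, as quantized ReLU would give
```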
“…A common indirect approach to overcome this problem is to train a structurally equivalent ANN model offline and then convert it to an SNN with the learned synaptic weights for inference, where the real-valued inputs and outputs of ANN neurons correspond to the rates of presynaptic (input) and postsynaptic (output) spikes of the SNN neurons (Diehl et al., 2015; Hunsberger and Eliasmith, 2016; Rueckauer et al., 2017; Zhang et al., 2019; Han and Roy, 2020; Han et al., 2020; Kim et al., 2020; Lee et al., 2020; Yang et al., 2020; Deng and Gu, 2021; Dubhir et al., 2021; Ho and Chang, 2021; Hu et al., 2021; Kundu et al., 2021; Li et al., 2021b; Bu et al., 2022; Liu et al., 2022). Although previous ANN-to-SNN techniques usually obtain state-of-the-art object recognition accuracies, they require complicated post-conversion fixes such as threshold balancing (Diehl et al., 2015; Rueckauer et al., 2017; Han et al., 2020; Liu et al., 2022), weight normalization (Diehl et al., 2015; Rueckauer et al., 2017; Ho and Chang, 2021), spike-norm (Sengupta et al., 2019), and channel-wise normalization (Kim et al., 2020) to compensate for the behavioral discrepancies between artificial and spiking neurons. In addition, a few of those methods require a relatively long time window (e.g., 2,500 algorithmic discrete time steps; Sengupta et al., 2019) to allow sufficient spike emissions to precisely represent the real values of the equivalent ANNs.…”
Section: Introduction
Mentioning confidence: 99%
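
As a concrete example of the post-conversion fixes mentioned above, here is a sketch of layer-wise data-based weight normalization in the style of Diehl et al. (2015) and Rueckauer et al. (2017); the function and the calibration statistics are illustrative assumptions, not their released code:

```python
import numpy as np

def normalize_weights(weights, max_acts):
    """Layer-wise data-based weight normalization (illustrative sketch).

    weights:  list of (out, in) matrices of the source ANN
    max_acts: maximum ReLU activation observed per layer on a calibration
              set; max_acts[0] is the input-layer maximum
    """
    normed = []
    for l, W in enumerate(weights):
        lam_prev, lam = max_acts[l], max_acts[l + 1]
        # Rescale so each layer's activations stay below the firing
        # threshold: W_l <- W_l * lambda_{l-1} / lambda_l
        normed.append(W * lam_prev / lam)
    return normed

W = [np.random.randn(4, 3), np.random.randn(2, 4)]
max_acts = [1.0, 3.5, 2.0]   # hypothetical calibration statistics
print([w.shape for w in normalize_weights(W, max_acts)])
```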
“…We choose rate-based methods including p-Norm (Rueckauer et al., 2017), Spike-Norm (Sengupta et al., 2019), RMP-SNN (Han et al., 2020), Opt. (Deng and Gu, 2021), and SpikeConverter (Liu et al., 2022); the phase-based Weighted Spikes method (Kim et al., 2018); the temporal-coding-based TSC method (Han and Roy, 2020); and other advanced methods such as CQ trained (Yan et al., 2021) and Hybrid training (Rathi et al., 2020) for comparison.…”
Section: Methods
Mentioning confidence: 99%
“…(Deng & Gu, 2021) divide conversion into floor and clip error from a new quantization perspective, (Li et al, 2021a) further optimize the conversion error. (Yu et al, 2021) construct the deep SNN with doublethreshold, (Liu et al, 2022) propose temporal separation to further zip gap between ANN and SNN. (Li & Zeng, 2022)…”
Section: Related Work
Mentioning confidence: 99%
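
The floor-and-clip decomposition mentioned above can be illustrated directly: an ideal IF layer's rate over T steps behaves like a clipped, floored quantization of the ANN activation, so the conversion error splits into a rounding (floor) part and a saturation (clip) part. This is an illustrative reading of Deng and Gu (2021), not their code:

```python
import numpy as np

def clip_floor(x, T=8, theta=1.0):
    """Rate an ideal IF layer with threshold theta produces over T steps,
    viewed as a quantized ReLU (illustrative sketch)."""
    return np.clip(np.floor(x * T / theta), 0, T) * theta / T

x = np.array([-0.3, 0.3, 0.55, 0.9, 1.4])
ann = np.maximum(x, 0.0)          # source ANN activation
snn = clip_floor(x)               # converted SNN rate
# Small errors come from the floor (rounding); the large one at
# x = 1.4 comes from the clip (saturation above theta).
print(np.round(ann - snn, 3))
```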