LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing
2022
DOI: 10.1109/tnnls.2021.3073016

Cited by 44 publications (22 citation statements)
References 53 publications
“…For ANNs, the two-dimensional convolutional neural network (2D-CNN) (Krizhevsky et al., 2012) has become a common tool for image classification. To train a 2D-CNN on the ES-dataset, a common approach is to accumulate the events into event frames along the time dimension and then reconstruct gray images (Wu et al., 2020) for training. Here we use the Edge-Integral algorithm described in Figure 6 for reconstruction.…”
Section: Results (mentioning)
Confidence: 99%
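The excerpt above describes turning an event stream into frames so a standard 2D-CNN can be trained on it. The Edge-Integral reconstruction it mentions is not detailed in this excerpt, so the sketch below only shows generic per-bin event accumulation; the field names (t, x, y, p), the array layout, and the equal-width binning scheme are assumptions for illustration.

```python
import numpy as np

def events_to_frames(events, height, width, num_frames):
    """Accumulate DVS events into per-polarity count frames over equal time bins.

    events: structured array with fields "t", "x", "y", "p" (timestamp,
    pixel coordinates, polarity in {0, 1}) -- an assumed layout.
    """
    frames = np.zeros((num_frames, 2, height, width), dtype=np.float32)
    t0, t1 = events["t"].min(), events["t"].max()
    span = max(int(t1 - t0), 1)  # avoid division by zero for degenerate streams
    # Assign every event to one of num_frames equal-width time bins.
    bins = np.minimum(((events["t"] - t0) * num_frames) // span, num_frames - 1)
    for b, x, y, p in zip(bins, events["x"], events["y"], events["p"]):
        frames[int(b), int(p), int(y), int(x)] += 1.0  # event count per pixel
    return frames
```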
“…For SNNs, we choose an SNN based on leaky integrate-and-fire (LIF) neurons (Dayan and Abbott, 2001) and an SNN based on leaky integrate-and-analog-fire (LIAF) neurons (Wu et al., 2020). Rate coding (Adrian and Zotterman, 1926) is used to decode the event information, because in this dataset the exact timing of the spikes matters less than the number of spikes.…”
Section: Results (mentioning)
Confidence: 99%
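Since the quoted comparison reads out the network with rate coding, here is a minimal sketch of that decoding step, assuming the SNN emits a binary spike tensor over T time steps: the predicted class is simply the one with the most spikes, and spike timing is ignored. The tensor shape is an assumption.

```python
import torch

def rate_decode(spikes: torch.Tensor) -> torch.Tensor:
    """spikes: binary tensor of shape (T, batch, num_classes)."""
    counts = spikes.sum(dim=0)       # total spikes per class over the time window
    return counts.argmax(dim=-1)     # class with the highest firing rate
```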
“…In LIAF models [1], the spike activation function F has a continuous output, while the membrane reset signal is still binary.…”
Section: A. Models (mentioning)
Confidence: 99%
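A minimal sketch of one LIAF update step consistent with the sentence above: the value passed to the next layer is a continuous activation of the membrane potential, while the reset is still driven by a binary spike. The leak alpha, the threshold v_th, and the choice of ReLU as the analog activation are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def liaf_step(v, x, alpha=0.9, v_th=1.0):
    """One LIAF neuron update. v: membrane potential, x: input current."""
    v = alpha * v + x                # leaky integration of the input
    spike = (v >= v_th).float()      # binary spike, used only for the reset
    out = torch.relu(v)              # continuous (analog) output to the next layer
    v = v * (1.0 - spike)            # hard reset where a spike occurred
    return out, v
```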
“…When the target data is static (like images), we prefer pipe-S, whereas when we need to train the LIF-SNN on dynamic data (like videos), we use pipe-D. Both sub-pipelines can be divided into three steps: training the original ANN model, fine-tuning the transitional model (a spike-activation ANN or a LIAF-SNN [1]), and fine-tuning the LIF model. A warm-up (WU) step and a sharpened ReLU are adopted as the smoothing methods, and the detailed process is introduced in the Method section.…”
Section: Introduction (mentioning)
Confidence: 99%
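The excerpt uses a "sharpened ReLU" as a smoothing device between the ANN and the LIF model but does not define it, so the following is only one plausible reading, assumed here: a clamped ReLU whose slope k is raised during fine-tuning so the transitional activation gradually approaches the 0/1 step of a spiking neuron. The function name, the parameter k, and the fixed clamp at 1.0 are all assumptions.

```python
import torch

def sharpened_relu(x: torch.Tensor, k: float = 1.0) -> torch.Tensor:
    # With k = 1 this is a clamped ReLU; increasing k over fine-tuning epochs
    # pushes the curve toward a 0/1 step, easing the hand-off to the LIF model.
    return torch.clamp(k * torch.relu(x), max=1.0)
```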