2022
DOI: 10.1007/978-3-031-19775-8_7

Exploring Lottery Ticket Hypothesis in Spiking Neural Networks

Cited by 18 publications (9 citation statements) · References 40 publications
“…Shi et al (2019) prune weight connections during training with a soft mask. Recently, deeper SNNs have been pruned with the ADMM optimization tool (Deng et al, 2021), gradient-based rewiring (Chen et al, 2021), and the lottery ticket hypothesis (Kim et al, 2022b). Meanwhile, various quantization techniques have also been proposed to compress SNNs (Datta et al, 2022; Guo et al, 2022b; Li et al, 2022a; Meng et al, 2022).…”
Section: Compression Methods for Efficient SNNs
confidence: 99%
“…As pruning for SNNs is popular due to their usage on edge devices (Neftci et al, 2016; Shi et al, 2019; Guo et al, 2020; Chen et al, 2021; Kim et al, 2022b), it is important to determine whether the advantage of EfficientLIF-Net remains in sparse SNNs.…”
Section: Methods
confidence: 99%
“…The FI quantitatively measures the amount of information retained in a statistical model after being trained on a given data distribution [49]. Many prior works have used this metric to measure different aspects of deep learning models, including SNN models [50, 51]. Unlike prior works, we use pseudo-labels to generate FI instead of ground-truth labels.…”
Section: Evaluation Metrics, 5.2.1 Fisher Information (FI)
confidence: 99%
“…where Tr(F) is the trace of the FIM and ∇ is the partial derivative operator. We follow the same implementation as the algorithm specified in [51].…”
Section: Evaluation Metrics, 5.2.1 Fisher Information (FI)
confidence: 99%
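The displayed equation that this excerpt follows was lost in extraction. As a reconstruction, assumed from the surrounding description rather than copied from [51], the trace of the Fisher Information Matrix (FIM) over N inputs, with pseudo-labels ŷ drawn from the model's own predictive distribution p_θ, is commonly estimated as

$$\mathrm{Tr}(F) \approx \frac{1}{N}\sum_{n=1}^{N}\mathbb{E}_{\hat{y}\sim p_{\theta}(y\mid x_n)}\Big[\big\lVert \nabla_{\theta}\log p_{\theta}(\hat{y}\mid x_n)\big\rVert_2^2\Big],$$

which is consistent with the excerpt's reading of Tr(F) as the trace of the FIM and ∇ as the partial derivative operator. A minimal sketch of this estimate, assuming a PyTorch classifier and hard pseudo-labels (argmax predictions); the function name and the hard-label choice are illustrative assumptions, not the implementation from [51]:

```python
import torch
import torch.nn.functional as F

def fisher_trace(net, inputs):
    """Empirical Tr(F): average squared per-example gradient of the
    log-likelihood, evaluated at pseudo-labels rather than ground truth."""
    total = 0.0
    for x in inputs:                            # one example at a time
        net.zero_grad()
        logits = net(x.unsqueeze(0))
        pseudo = logits.argmax(dim=1)           # the model's own prediction
        loss = F.cross_entropy(logits, pseudo)  # -log p(pseudo | x)
        loss.backward()
        total += sum((p.grad ** 2).sum().item()
                     for p in net.parameters() if p.grad is not None)
    return total / len(inputs)
```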
“…These winning tickets can be unearthed through iterative or one-shot pruning methods, where parameters with the smallest magnitudes are systematically removed [25]. Since its inception, LTH has garnered significant attention and been extensively studied [12, 25-27]. Zhang et al [12] not only employed dynamical systems theory and inertial manifold theory to theoretically substantiate the efficacy of the lottery ticket hypothesis, but also proposed a practical, lossless pruning solution.…”
Section: Model Pruning
confidence: 99%
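Because this excerpt spells out the mechanics (train, remove the smallest-magnitude parameters, rewind the survivors to their initial values), a short sketch may help. This is a generic one-shot magnitude-pruning illustration of LTH, assuming a PyTorch model; it is not the procedure of Zhang et al [12] or of Kim et al (2022b), and all names are illustrative:

```python
import torch
import torch.nn as nn

def one_shot_magnitude_prune(model: nn.Module, sparsity: float) -> dict:
    """Return {param_name: 0/1 mask} keeping the largest-magnitude weights."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:                 # skip biases / norm parameters
            continue
        k = int(sparsity * param.numel())   # number of weights to remove
        if k == 0:
            masks[name] = torch.ones_like(param)
            continue
        # threshold at the k-th smallest |weight|; everything below is pruned
        threshold = param.abs().flatten().kthvalue(k).values
        masks[name] = (param.abs() > threshold).float()
    return masks

def rewind_to_ticket(model: nn.Module, masks: dict, init_state: dict) -> None:
    """Reset surviving weights to their initial values: the 'winning ticket'."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.copy_(init_state[name] * masks[name])

# Usage sketch: save the init, train, prune 80%, rewind, then retrain.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init_state = {k: v.clone() for k, v in model.state_dict().items()}
# ... train `model` here ...
masks = one_shot_magnitude_prune(model, sparsity=0.8)
rewind_to_ticket(model, masks, init_state)
```

Iterative LTH repeats this prune-and-rewind loop, removing a small fraction of the remaining weights per round instead of all at once.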