2023
DOI: 10.48550/arxiv.2303.15479
Preprint
Exploring the Performance of Pruning Methods in Neural Networks: An Empirical Study of the Lottery Ticket Hypothesis

Cited by 1 publication (1 citation statement)
References 0 publications
“…The core innovation lies in reevaluating the pruning process and questioning established methodologies, opening the door to enhanced neural network compression techniques.

  Category               Reference            Year  Pruned unit  Criterion
                         22                   2015  weights      weight magnitude
                         Fladmark et al. 34   2023  weights      weight magnitude
                         Junghun et al. 35    2022  filters      gradient magnitude
  Pruning at Onset       Lee et al. 35        2022  weights      "dynamical isometry"
                         Wang et al. 36       2020  weights      second-order derivative
                         Tanaka et al. 37     2020  weights      "synaptic flow"
  Sparse Training        Peng et al. 38       2022  weights      weight magnitude
                         Chen et al. 39       2021  weights      weight magnitude
                         Liu et al. 40        2021  weights      weight magnitude
  Learning by Masks      Choudhary et al. 41  2022  filters      N/A
                         Kusupati et al. 42   2020  weights      N/A
                         Liu et al. 43        2021  weights      N/A
  Penalty Based Methods  Zhu et al. 44        2022  filters      N/A
                         Yeom et al. 45       2021  weights      weight magnitude
                         Sanh et al. 46       2020  filters      weight magnitude
                         Wang et al. 47       2021  weights      weight magnitude…”

Section: Pruning Methods (mentioning)
Confidence: 99%
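The criterion that recurs most often in the table above is weight magnitude: prune the weights with the smallest absolute values. A minimal sketch of that idea, not taken from any of the cited papers (function name, array shapes, and sparsity level are illustrative assumptions):

```python
# Illustrative global weight-magnitude pruning: zero out the fraction
# `sparsity` of entries with the smallest |w|. Not the implementation
# from any cited work; a generic sketch of the criterion.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest-|w| fraction set to 0."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold  # ties at the threshold are pruned
    return weights * mask

w = np.array([[0.5, -0.01], [0.2, -0.8]])
pruned = magnitude_prune(w, 0.5)  # keeps 0.5 and -0.8, zeros the rest
```

In iterative schemes such as the lottery ticket procedure studied in this preprint, a step like this is typically applied repeatedly, with the surviving weights rewound or retrained between rounds.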