2024
DOI: 10.1109/tnnls.2022.3184730
Stage-Wise Magnitude-Based Pruning for Recurrent Neural Networks

Cited by 4 publications (3 citation statements)
References 30 publications
“…These networks capture temporal patterns in data, aiding in forecasts and financial analysis. RNNs face challenges like the vanishing gradient problem, addressed by advances in LSTM and GRU technologies, enhancing their capability to remember information over extended periods [34]. Despite computational demands, RNNs' unique architecture makes them invaluable for economic research and policymaking, offering insights into temporal economic dynamics.…”
Section: Recurrent Neural Network
confidence: 99%
“…The implicit assumption is that, for the SAERL framework, the fitness of candidate solutions is affected by only a few dimensions rather than all of them. This assumption is technically reasonable for the DNN-based policy, since DNNs often preserve redundant weights and can be effectively pruned [10,7,8].…”
Section: PE Module: Random Embedding
confidence: 99%
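The magnitude-based pruning this statement alludes to (and which the cited paper's title names) can be sketched minimally. The global-threshold scheme, the `sparsity` parameter, and the NumPy implementation below are illustrative assumptions, not the paper's stage-wise method:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries until `sparsity`
    fraction of the weights is removed (illustrative sketch only)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; entries at or below it are pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W_pruned = magnitude_prune(W, sparsity=0.5)
# With continuous random weights (no ties), half the entries are now zero.
```

The stage-wise variant in the paper schedules such pruning across training stages rather than applying one threshold at once; the sketch only shows the core magnitude criterion.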
“…We vary the parameter value over a wide range and re-evaluate, to perform a parameter sensitivity analysis. More specifically, we choose the number of candidate policies involved in pre-selection in the surrogate model as the hyperparameter, and set it to four different values: [3, 5, 10, 100]. We also select three games (i.e., BeamRider, Bowling, Freeway) for sensitivity analysis.…”
Section: Sensitivity Analysis
confidence: 99%
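The sweep described in that statement amounts to re-running the same experiment once per candidate hyperparameter value with everything else held fixed. A minimal sketch, where `evaluate` is a hypothetical stand-in for the actual policy evaluation (the scoring formula is invented for illustration):

```python
# Sensitivity sweep: evaluate once per candidate value of one hyperparameter,
# holding all other settings fixed, then compare the resulting scores.
def evaluate(num_candidates: int) -> float:
    # Hypothetical placeholder; a real run would train/evaluate the policy here.
    return 1.0 - 1.0 / (1 + num_candidates)

candidate_values = [3, 5, 10, 100]  # the four values tested in the statement above
results = {n: evaluate(n) for n in candidate_values}
for n, score in sorted(results.items()):
    print(f"num_candidates={n:>3}  score={score:.3f}")
```

Comparing how much `score` moves across the four values is what establishes whether the method is sensitive to that hyperparameter.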