2022
DOI: 10.48550/arxiv.2205.15043
Preprint

RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch

Abstract: Training deep reinforcement learning (DRL) models usually requires high computation costs. Therefore, compressing DRL models possesses immense potential for training acceleration and model deployment. However, existing methods that generate small models mainly adopt the knowledge distillation based approach by iteratively training a dense network, such that the training process still demands massive computing resources. Indeed, sparse training from scratch in DRL has not been well explored and is particularly …

Cited by 1 publication (1 citation statement)
References: 17 publications

“…PoPS can effectively prune DNNs, but just as with other models, it lacks the ability to prune training data in the same way. The Rigged Reinforcement Learning Lottery (RLx2) applied ultra-sparse networks to achieve model compression (14). A variety of different mechanisms, such as a dynamic-capacity replay buffer and gradient-guided topology search scheme, work in tandem to achieve such results.…”
Section: Introduction (citation type: mentioning); confidence: 99%
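The citing statement attributes RLx2's compression to an ultra-sparse network whose connectivity is adjusted by a gradient-guided topology search. The sketch below shows what one such topology-update step typically looks like in the drop-and-grow style that "gradient-guided topology search" usually denotes (as in RigL): low-magnitude connections are pruned and the freed slots are regrown where the dense gradient is largest. The function name drop_and_grow, the update_fraction parameter, and the PyTorch tensor layout are assumptions for illustration, not the authors' implementation.

import torch

def drop_and_grow(weight, mask, grad, update_fraction=0.1):
    # One gradient-guided topology update for a single sparse layer (sketch).
    # weight: dense weight tensor; entries outside the mask are kept at zero
    # mask:   binary tensor of the same shape, 1 = active connection
    # grad:   dense gradient of the loss w.r.t. the weight
    n_active = int(mask.sum().item())
    k = max(1, int(update_fraction * n_active))  # number of connections to swap

    # Drop: remove the k active connections with the smallest weight magnitude.
    active_mag = torch.where(mask.bool(), weight.abs(),
                             torch.full_like(weight, float("inf")))
    drop_idx = torch.topk(active_mag.flatten(), k, largest=False).indices

    # Grow: activate the k inactive positions with the largest gradient magnitude.
    inactive_grad = torch.where(mask.bool(), torch.zeros_like(grad), grad.abs())
    grow_idx = torch.topk(inactive_grad.flatten(), k, largest=True).indices

    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0
    new_mask = new_mask.view_as(mask)

    new_weight = weight * new_mask                  # zero out dropped connections
    grown = new_mask.bool() & ~mask.bool()
    new_weight[grown] = 0.0                         # grown connections start from zero
    return new_weight, new_mask

In a full sparse-from-scratch training loop, a step like this would run only every few thousand gradient updates on each layer of the actor and critic, with update_fraction annealed toward zero so the topology eventually freezes; the dynamic-capacity replay buffer mentioned in the same sentence is a separate mechanism and is not modelled here.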