2022
DOI: 10.48550/arxiv.2206.05266
Preprint

Does Self-supervised Learning Really Improve Reinforcement Learning from Pixels?

Abstract: We investigate whether self-supervised learning (SSL) can improve online reinforcement learning (RL) from pixels. We extend the contrastive reinforcement learning framework (e.g., CURL) that jointly optimizes SSL and RL losses, and conduct extensive experiments with various self-supervised losses. Our observations suggest that the existing SSL framework for RL fails to bring meaningful improvement over baselines that only take advantage of image augmentation, when the same amount of data and augmen…
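The image-augmentation baselines the abstract refers to typically apply a random shift to each pixel observation before it reaches the encoder. A minimal NumPy sketch of that augmentation (illustrative only; `pad=4` is a common choice in the literature, not a value taken from this paper):

```python
import numpy as np

def random_shift(img, pad=4, rng=None):
    # Random-shift augmentation: pad the image at its borders,
    # then crop back to the original size at a random offset.
    # img: (H, W, C) array; returns an array of the same shape.
    rng = rng or np.random.default_rng()
    h, w, _ = img.shape
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    y, x = rng.integers(0, 2 * pad + 1, size=2)
    return padded[y:y + h, x:x + w]
```

Because the crop offset is resampled every time, two calls on the same observation yield two different "views" of it, which is also how contrastive methods generate positive pairs.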

Cited by 1 publication
(1 citation statement)
References 38 publications
“…Curl [44] adds a contrastive loss using a siamese network with a momentum encoder. Another work studies different joint-learning frameworks using different self-supervised methods [34]. SPR [42] uses an auxiliary task that consists of training the encoder followed by an RNN to predict the encoder representation k steps into the future.…”
Section: Related Work
confidence: 99%
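The citation statement describes CURL's core mechanism: an InfoNCE contrastive loss over two augmented views, with the key encoder updated as a momentum (EMA) copy of the query encoder. A minimal NumPy sketch of both pieces, assuming precomputed (N, D) embeddings (illustrative hyperparameters, not values from the paper):

```python
import numpy as np

def momentum_update(theta_q, theta_k, m=0.99):
    # EMA update for the key encoder's parameters, as in CURL/MoCo:
    # theta_k <- m * theta_k + (1 - m) * theta_q.
    return {name: m * theta_k[name] + (1 - m) * theta_q[name]
            for name in theta_k}

def info_nce(queries, keys, temperature=0.1):
    # queries, keys: (N, D) embeddings of two augmented views.
    # Row i of `keys` is the positive for row i of `queries`;
    # all other rows serve as negatives.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                 # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # CE on the diagonal
```

In the joint-learning frameworks the statement mentions, this loss is added to the RL objective so the shared encoder receives gradients from both; only the query encoder is trained directly, while the momentum copy provides slowly moving targets.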