2020
DOI: 10.48550/arxiv.2003.04298
Preprint

On Compositions of Transformations in Contrastive Self-Supervised Learning

Cited by 1 publication (2 citation statements)
References 0 publications
“…Recent works (Li et al., 2021b;a) apply contrastive learning over trajectories for compact representations, but ignore the influence of behavior policy mismatch. … (Chen et al., 2020; He et al., 2020; Patrick et al., 2020), multimodal learning (Sermanet et al., 2018; Tian et al., 2020; Liu et al., 2020b) and image-based reinforcement learning (Anand et al., 2019; Stooke et al., 2021; Laskin et al., 2020).…”
Section: Related Work
confidence: 99%
“…Diversity: According to prior works (Patrick et al., 2020; Chen et al., 2020), the performance of contrastive learning depends on the diversity of negative pairs. More diversity increases the difficulty in optimizing InfoNCE and helps the encoder to learn meaningful representations.…”
Section: Negative Pairs Generation
confidence: 99%
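
The statement above refers to the InfoNCE objective, in which each positive pair is scored against the remaining samples in the batch as negatives, so a larger and more diverse negative set makes the objective harder to optimize. Below is a minimal sketch of a batch-wise InfoNCE loss in PyTorch; the function name info_nce_loss, the paired queries/keys inputs, and the temperature value are illustrative assumptions, not code from the cited papers.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(queries, keys, temperature=0.1):
        # queries, keys: (N, D) embeddings; keys[i] is the positive for
        # queries[i], and the other N-1 keys act as in-batch negatives.
        q = F.normalize(queries, dim=1)   # unit-norm embeddings
        k = F.normalize(keys, dim=1)
        logits = q @ k.t() / temperature  # (N, N) cosine-similarity matrix
        # Matching pairs sit on the diagonal, so the target class for row i is i.
        labels = torch.arange(q.size(0), device=q.device)
        return F.cross_entropy(logits, labels)

Under this formulation, enlarging or diversifying the batch adds more negatives per query, which is one concrete sense in which greater negative-pair diversity increases the difficulty of the InfoNCE objective, as the quoted statement notes.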