2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021
DOI: 10.1109/iccv48922.2021.00813
Self-Supervised Video Representation Learning with Meta-Contrastive Network

Cited by 32 publications (5 citation statements) · References 27 publications
“…While our work focused on supervised learning with visual data, our method and general approach is applicable to other supervised learning tasks. Indeed, the MAML framework has been established for tasks such as reinforcement learning (46), multilingual speech emotion classification (47), and self-supervised learning (48). In this work, SOEL was limited to the final layer of the network.…”
Section: Discussion
confidence: 99%
“…object recognition [29], video representation learning [54]. There are also some prior works in continual learning.…”
Section: Related Work
confidence: 99%
“…Conventional KD and SSL methods offer numerous potential teacher choices, e.g., larger but static pretrained teacher networks [27], or networks that share the same model architecture but use weights from a previous epoch [42], or as an exponential moving average [24]. The primary drawback to these approaches is reduced computational and memory efficiency as they require a secondary inference stage using additional model weights that must be kept in memory.…”
Section: Teacher Network Selection
confidence: 99%
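The last statement mentions teachers whose weights are an exponential moving average (EMA) of the student's. A minimal sketch of that update rule follows; the function name and plain-list weights are illustrative assumptions, not taken from the cited papers.

```python
# Hedged sketch of an EMA teacher update: the teacher's parameters track a
# slowly moving average of the student's, so no separate pretrained teacher
# is needed. Names and data layout here are illustrative only.

def ema_update(teacher_weights, student_weights, momentum=0.99):
    """Blend student weights into the teacher: t_new = m * t + (1 - m) * s."""
    return [t * momentum + s * (1.0 - momentum)
            for t, s in zip(teacher_weights, student_weights)]

# Example: after one update, the teacher moves slightly toward the student.
teacher = ema_update([0.0, 0.0], [1.0, 2.0], momentum=0.99)
```

In practice the teacher still doubles the parameter memory and requires its own forward pass, which is the efficiency drawback the quoted passage points out.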