2022
DOI: 10.48550/arxiv.2207.12065
Preprint
Dynamic Channel Selection in Self-Supervised Learning

Abstract: Whilst computer vision models built using self-supervised approaches are now commonplace, some important questions remain. Do self-supervised models learn highly redundant channel features? What if a self-supervised network could dynamically select the important channels and get rid of the unnecessary ones? Currently, convnets pre-trained with self-supervision have obtained comparable performance on downstream tasks in comparison to their supervised counterparts in computer vision. However, there are drawbacks…

Cited by 2 publications (1 citation statement)
References 20 publications (35 reference statements)
“…Herrmann et al [5] focused on channel selection with Gumbel-Softmax, which differs from the layer-level methods in [4], [25]. Krishna et al [26] extended channel selection techniques to self-supervised learning. Wojcik et al [10] applied a routing system to visual transformers.…”
Section: Related Work, A. Conditional Computing
Citation type: mentioning; confidence: 99%
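The citation statement above refers to channel selection with Gumbel-Softmax, i.e. using a relaxed categorical sample to decide which channels stay active while keeping the decision differentiable during training. The following is a minimal sketch of that idea in plain Python; the function names, the per-channel (keep, drop) logit pairing, and the 0.5 threshold are illustrative assumptions, not the formulation from any of the cited papers.

```python
import math
import random


def gumbel_softmax(logits, tau=1.0, rng=random):
    """Relaxed categorical sample over `logits` at temperature `tau`.

    Adds Gumbel(0, 1) noise to each logit, then applies a
    temperature-scaled softmax. As tau -> 0 the output approaches a
    one-hot (hard) selection while remaining differentiable for tau > 0.
    """
    # Gumbel(0, 1) noise via inverse transform: g = -log(-log(u)).
    noisy = []
    for logit in logits:
        u = max(rng.random(), 1e-12)  # guard against log(0)
        noisy.append(logit - math.log(-math.log(u)))
    scaled = [n / tau for n in noisy]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def select_channels(channel_logits, tau=0.5, threshold=0.5):
    """Per-channel binary gate (hypothetical illustration).

    Each channel carries a learned (keep, drop) logit pair; a relaxed
    Gumbel-Softmax sample over that pair decides whether the channel
    stays active (gate 1.0) or is switched off (gate 0.0).
    """
    gates = []
    for keep_logit, drop_logit in channel_logits:
        probs = gumbel_softmax([keep_logit, drop_logit], tau=tau)
        gates.append(1.0 if probs[0] > threshold else 0.0)
    return gates
```

At inference time the gates would typically be hardened (argmax instead of sampling), so channels whose drop logit dominates are skipped entirely, which is what makes the selection dynamic per input.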