24th Irish Machine Vision and Image Processing Conference 2022
DOI: 10.56541/lkli8696

Dynamic Channel Selection in Self-Supervised Learning

Abstract: Whilst computer vision models built using self-supervised approaches are now commonplace, some important questions remain. Do self-supervised models learn highly redundant channel features? What if a self-supervised network could dynamically select the important channels and get rid of the unnecessary ones? Convnets pre-trained with self-supervision currently obtain performance on downstream computer vision tasks comparable to their supervised counterparts. However, there are drawbacks…

Cited by 2 publications (7 citation statements)
References: 0 publications
“…To make training feasible, the Gumbel-softmax trick (Jang et al., 2016) is adopted. The Gumbel-Trick has been widely used as a reparameterisation technique for the task of dynamic channel selection (Krishna et al., 2022; Li et al., 2021; Herrmann et al., 2020; Veit & Belongie, 2018). For more clarity, refer to Figure 4 in Appendix A.3.…”
Section: Methods
confidence: 99%
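To make the reparameterisation mentioned in this excerpt concrete, the PyTorch sketch below shows one common way the Gumbel-softmax trick is applied to channel selection: a small gating head predicts per-channel keep/drop logits, and `F.gumbel_softmax` with `hard=True` produces a binary mask in the forward pass while keeping usable gradients in the backward pass. This is a minimal sketch under stated assumptions, not the implementation from the cited papers; the module name `GumbelChannelGate`, the pool-plus-linear gating head, and the temperature value are illustrative.

```python
# Minimal sketch (not the cited papers' exact implementation): using the
# Gumbel-softmax trick to make per-channel on/off decisions differentiable.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelChannelGate(nn.Module):
    """Predicts a binary keep/drop mask over channels from pooled features."""
    def __init__(self, channels: int, tau: float = 1.0):
        super().__init__()
        self.tau = tau  # softmax temperature (illustrative default)
        # Two logits per channel: index 0 = drop, index 1 = keep.
        self.fc = nn.Linear(channels, channels * 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a conv block.
        b, c, _, _ = x.shape
        logits = self.fc(x.mean(dim=(2, 3))).view(b, c, 2)
        # hard=True: binary mask in the forward pass,
        # straight-through soft gradients in the backward pass.
        mask = F.gumbel_softmax(logits, tau=self.tau, hard=True)[..., 1]
        return x * mask.view(b, c, 1, 1)

# Usage: gate = GumbelChannelGate(256); y = gate(feature_map)
```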
“…Most work on dynamic computation has been confined to supervised learning. Recently, (Krishna et al., 2022) used SimSiam (Chen & He) as a self-supervised objective combined with a dynamic channel gating (DGNet) (Li et al., 2021) mechanism trained from scratch, and showed that comparable performance can be achieved under channel budget constraints. Likewise, (Meng et al., 2022) used channel-gating-based dynamic pruning (CGNet) (Hua et al., 2019) augmented with contrastive learning to achieve inference speed-ups without substantial loss of performance.…”
Section: Self-supervised Dynamic Computation and Beyond
confidence: 99%
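As a rough illustration of the recipe described in this excerpt, namely a self-supervised objective combined with channel gating under a budget constraint, the sketch below pairs a SimSiam-style negative-cosine loss with a simple penalty pushing average channel usage toward a target budget. The function names, the quadratic budget penalty, and the 0.5 target are assumptions for illustration; they are not the exact DGNet or CGNet formulations from the cited works.

```python
# Sketch of the general recipe: SimSiam-style self-supervised loss plus a
# penalty that keeps the fraction of active channels near a target budget.
# Names and the 0.5 default budget are illustrative assumptions.
import torch
import torch.nn.functional as F

def simsiam_loss(p1, z2, p2, z1):
    # Negative cosine similarity with stop-gradient on the target embeddings.
    return -0.5 * (F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
                   + F.cosine_similarity(p2, z1.detach(), dim=-1).mean())

def budget_loss(gate_masks, target=0.5):
    # gate_masks: list of (B, C) binary masks emitted by the channel gates.
    usage = torch.cat([m.mean(dim=1) for m in gate_masks]).mean()
    return (usage - target) ** 2

def total_loss(p1, z2, p2, z1, gate_masks, budget_weight=1.0, target=0.5):
    # Self-supervised term plus weighted channel-budget penalty.
    return simsiam_loss(p1, z2, p2, z1) + budget_weight * budget_loss(gate_masks, target)
```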