2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00948

SelfReg: Self-supervised Contrastive Regularization for Domain Generalization

Cited by 128 publications (56 citation statements)
References 18 publications
“…and to the baseline trained directly on RGB natural images. SelfReg [46] performs poorly, as expected; it is intended for multi-source domain generalization. For L2D [51], which is designed specifically for the single-source task, we run the provided code and ensure that the optimal learning rate, according to validation performance on the source task, is used; the reported score is reproduced.…”
Section: A. Ablation Study on Sketchy
confidence: 55%
“…c) Domain generalization: The most common approach for domain generalization is invariant feature learning, based on the theoretical results of Ben-David et al. [42]. Representative approaches include kernel-based invariant feature learning by minimizing domain dissimilarity [43], multi-task autoencoders that transform the original image into other related domains, domain classifiers as adversaries to match the source domain distributions in the feature space [44], [45], and cross-domain non-contrastive learning as regularization [46]. Some methods specialize in single-source domain generalization.…”
Section: Introduction
confidence: 99%
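To make the adversarial feature-matching idea cited above ([44], [45]) concrete, below is a minimal PyTorch sketch of a DANN-style gradient-reversal setup; the class names, layer sizes, and the GradReverse helper are illustrative assumptions, not the cited authors' code.

import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    # Identity on the forward pass; flips the gradient sign on backward,
    # so the feature extractor learns to fool the domain discriminator.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdversarialNet(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=512, num_classes=7, num_domains=3):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)
        # The discriminator is trained through the reversed gradient,
        # pushing features toward invariance across the source domains.
        self.domain_disc = nn.Linear(feat_dim, num_domains)

    def forward(self, x, lambd=1.0):
        z = self.features(x)
        return self.classifier(z), self.domain_disc(GradReverse.apply(z, lambd))

In training, the sum of the class loss and the domain loss is minimized; the reversal layer turns the domain loss into an adversarial signal for the feature extractor.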
“…To alleviate this challenge, Pandey et al. [48] used a metric transformation to keep the source samples clustered near their corresponding categories in the feature space. Kim et al. [49] used a self-supervised contrastive loss to bring the representations of positive-pair samples close together. However, these interesting works are designed for the task of domain generalization, in which the target domain is inaccessible during training; thus, they rarely consider transferring discriminative information to the target domain.…”
Section: B. Pseudo Target Domain: Construction and Alignment
confidence: 99%
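As a concrete reading of the positive-pair regularization described above ([46], [49]), here is a minimal Python sketch that pulls each feature toward a randomly chosen same-class sample in the batch; the function name and the plain MSE formulation are simplifying assumptions, not the authors' released code (SelfReg additionally uses a learned projection and logit-level terms).

import torch
import torch.nn.functional as F

def same_class_alignment_loss(features, labels):
    # features: (N, D) embeddings; labels: (N,) class ids.
    # For each class, shuffle the same-class samples to form positive
    # pairs and penalize the distance between paired features.
    loss = features.new_zeros(())
    num_classes_used = 0
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue  # no positive pair available for this class
        perm = idx[torch.randperm(idx.numel(), device=idx.device)]
        loss = loss + F.mse_loss(features[idx], features[perm])
        num_classes_used += 1
    return loss / max(num_classes_used, 1)

Because pairs are drawn within a class but across whatever domains happen to be in the batch, minimizing this term alongside the usual classification loss encourages domain-invariant, class-consistent features.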
“…The Domain Generalization (DG) task also tackles the OOD generalization problem, but requires sufficient domain samples and full ImageNet pretraining. In this paper, we select three SOTA DG approaches (SD [70], SelfReg [50], and SagNet [63]) for comparison. These methods do not require domain labels, which is the same setting as ours.…”
Section: Implementation Details
confidence: 99%