2021
DOI: 10.48550/arxiv.2107.05446
Preprint
Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration

Abstract: Source-free domain adaptation (SFDA) aims to adapt a model trained on labelled data in a source domain to unlabelled data in a target domain without access to the source-domain data during adaptation. Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain. We address these issues for a particularly pervasive type of…
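The abstract critiques entropy-minimization as the dominant SFDA objective. As a minimal sketch (not the paper's method, and with illustrative function names), the objective is simply the mean Shannon entropy of the model's softmax predictions on unlabelled target data, which adaptation then drives down:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def prediction_entropy(logits):
    # Mean Shannon entropy of the predictive distribution; entropy-minimization
    # SFDA methods use this (or a variant) as the unsupervised adaptation loss.
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# Confident predictions give low entropy; uniform predictions give high entropy
# (log of the number of classes). Minimizing this pushes predictions to be
# confident, which is why it only applies to classification and can destroy
# calibration, as the abstract notes.
confident = np.array([[10.0, 0.0, 0.0]])
uniform = np.array([[1.0, 1.0, 1.0]])
```

Note the loss is defined purely on model outputs, which is what makes it usable without source data.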

Cited by 2 publications (4 citation statements)
References 28 publications
“…For example, since the development of a neural approach by Gatys et al [17], many methods have been designed that apply transformations to hidden units of convolutional networks to stylise images [12,27,28]. Style transfer is also closely related to the wider task of domain adaptation [4,13,14,25,45,52].…”
Section: Style Transfer and Modelling
confidence: 99%
“…SHOT [40] employs an information maximization loss along with self-supervised pseudo-labeling, and is extended to the multi-source scenario via source model weighting [1]. BUFR [13] aligns the target-domain feature distribution with that of the source domain. Another line of work leverages Batch Normalization (BN) [28] layers by replacing the BN statistics computed on the source domain with those computed on the target domain [38], or by training the BN parameters on the target domain via entropy minimization [74].…”
Section: Source-free Domain Adaptation
confidence: 99%
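The snippet above describes replacing source-domain BN statistics with target-domain ones ([38] in the quoted text). A minimal sketch of that idea, assuming a toy inference-time batch-norm layer (the class and function names here are illustrative, not from any of the cited papers):

```python
import numpy as np

class BatchNorm1d:
    # Minimal inference-time batch norm holding stored running statistics,
    # as accumulated on the source domain during training.
    def __init__(self, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
        self.mean, self.var = mean, var
        self.gamma, self.beta, self.eps = gamma, beta, eps

    def __call__(self, x):
        return self.gamma * (x - self.mean) / np.sqrt(self.var + self.eps) + self.beta

def adapt_bn_statistics(bn, target_features):
    # Source-free BN adaptation: overwrite the source-domain running
    # statistics with statistics estimated on unlabelled target data.
    # No source data and no labels are needed.
    bn.mean = target_features.mean(axis=0)
    bn.var = target_features.var(axis=0)

# Source statistics say features are zero-mean, unit-variance...
bn = BatchNorm1d(mean=np.zeros(4), var=np.ones(4))
# ...but a measurement shift moved the target features.
rng = np.random.default_rng(0)
target = rng.normal(loc=3.0, scale=2.0, size=(1000, 4))
adapt_bn_statistics(bn, target)
normalised = bn(target)  # target features are re-normalised correctly
```

After adaptation the layer again produces approximately zero-mean, unit-variance activations on target data, which is the whole effect of this family of methods.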
“…While these approaches rely on the availability of data from a known target domain, we address the DFDG scenario where the model is expected to generalize to a priori unknown target domain(s) without any modification or exposure to their data. We also note that some methods [31,40,13] modify the training procedure on the source domain, which would not be possible in cases where the data is not accessible anymore.…”
Section: Source-free Domain Adaptation
confidence: 99%