2024
DOI: 10.1109/tnnls.2022.3201623

Balancing Transferability and Discriminability for Unsupervised Domain Adaptation

Cited by 21 publications (10 citation statements)
References 33 publications
“…$\sigma(p_i)^{T}\sigma(p_v)$ denotes the similarity of the $i$-th sample and the $v$-th sample, with the softmax operation $\sigma(\cdot)$ applied to their model outputs $p_i$, $p_v$; $\gamma_{iv}$ is a threshold function used to determine whether the two samples belong to one class. The last class of approaches preserves a memory bank to store the outputs of historical models, thus constructing positive and negative sample pairs [122], [123]. HCID [122] uses the current model to encode the samples of the current batch as the query $q_t = M_t(x_q)$, and then uses the historical model to encode the previously stored samples as keys…”
Section: Contrastive Learning
confidence: 99%
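The pair-selection rule quoted above is compact enough to sketch. Below is a minimal PyTorch sketch, not the cited work's implementation: softmax the model outputs, take the inner product as a similarity, and threshold it to mark positive pairs. The fixed scalar `gamma` stands in for the threshold function $\gamma_{iv}$; all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def select_positive_pairs(p, gamma=0.9):
    """Mark sample pairs as positives when the inner product of their
    softmaxed outputs exceeds a threshold. `gamma` is a fixed scalar
    standing in for the threshold function gamma_iv of the quote."""
    q = F.softmax(p, dim=1)          # sigma(p_i) for each sample in the batch
    sim = q @ q.t()                  # sim[i, v] = sigma(p_i)^T sigma(p_v)
    pos_mask = sim > gamma           # likely same-class pairs
    pos_mask.fill_diagonal_(False)   # drop trivial self-pairs
    return sim, pos_mask

# usage: p = model(x), shape (batch, num_classes)
sim, pos = select_positive_pairs(torch.randn(8, 10))
```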
“…The last class of approaches preserves a memory bank to store the outputs of historical models, thus constructing positive and negative sample pairs [122], [123]. HCID [122] uses the current model to encode the samples of the current batch as the query $q_t = M_t(x_q)$, and then uses the historical model to encode the previously stored samples as keys…”
Section: Contrastive Learning
confidence: 99%
“…To alleviate the dependence on source domain data, which may be inaccessible due to privacy issues or storage overhead, source-free domain adaptation (SFDA) emerges, which handles DA on target data without access to source data [12], [13], [14], [15], [16]. SFDA is often achieved through self-training [12], [17], self-supervised learning [16], [18], or introducing prior knowledge [12]…”
Section: Introduction
confidence: 99%
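As a rough illustration of the self-training route to SFDA mentioned in the quote, the sketch below pseudo-labels a target batch with the model's own confident predictions and retrains on them. `conf_thresh` and all other names are assumptions for illustration, not details from the cited papers.

```python
import torch
import torch.nn.functional as F

def self_training_step(model, optimizer, x_t, conf_thresh=0.95):
    """One self-training step on an unlabeled target batch: the model's
    own confident predictions serve as pseudo labels, so no source data
    is needed. `conf_thresh` is an assumed hyperparameter."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x_t), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > conf_thresh            # keep confident predictions only
    if keep.sum() == 0:
        return 0.0                           # nothing confident enough this batch

    model.train()
    loss = F.cross_entropy(model(x_t[keep]), pseudo[keep])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```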
“…The second category is the image translation loss that generates source data with target-like styles and appearance via GANs [7,239,253] and spectrum matching [14,185]. The third category is the self-training loss that re-trains the network iteratively with pseudo-labeled target samples [7,13,14,141,183,184,254,255].…”
Section: Related Work
confidence: 99%
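The "spectrum matching" route can be made concrete with an FDA-style frequency swap: keep the source phase but copy the target's low-frequency amplitudes, so the source image adopts target-like appearance. This is a minimal sketch under that assumption; the window parameter `beta` and the function name are illustrative, not from the cited works.

```python
import torch

def spectrum_match(src, tgt, beta=0.01):
    """Give a source image (C, H, W) target-like appearance by copying the
    target's low-frequency amplitude spectrum while keeping the source
    phase. `beta` sets the size of the swapped window (assumed)."""
    fft_src = torch.fft.fft2(src, dim=(-2, -1))
    fft_tgt = torch.fft.fft2(tgt, dim=(-2, -1))
    amp_src, pha_src = fft_src.abs(), fft_src.angle()
    amp_tgt = fft_tgt.abs()

    h, w = src.shape[-2:]
    b = max(1, int(min(h, w) * beta))  # half-size of the low-frequency window
    # in an unshifted spectrum the low frequencies sit at the four corners
    amp_src[..., :b, :b] = amp_tgt[..., :b, :b]
    amp_src[..., :b, -b:] = amp_tgt[..., :b, -b:]
    amp_src[..., -b:, :b] = amp_tgt[..., -b:, :b]
    amp_src[..., -b:, -b:] = amp_tgt[..., -b:, -b:]

    mixed = amp_src * torch.exp(1j * pha_src)  # recombine amplitude and phase
    return torch.fft.ifft2(mixed, dim=(-2, -1)).real
```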
“…This section presents the proposed historical contrastive learning [254], which memorizes the source hypothesis to make up for the absence of source data, as illustrated in Fig. 5.2.…”
Section: Historical Contrastive Learning
confidence: 99%
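A minimal sketch of the historical contrastive idea described in these quotes, assuming the memory bank stores one historical view per query sample: the current model $M_t$ produces queries, a frozen historical model produces keys, and a standard InfoNCE loss pulls each query toward its own historical key. All names and the InfoNCE form are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def historical_contrastive_loss(model_t, model_hist, x_q, bank, tau=0.07):
    """InfoNCE over current-vs-historical embeddings: the current model
    encodes the batch as queries q_t = M_t(x_q); a frozen historical model
    encodes the stored samples as keys. Assumes bank[i] holds the stored
    view of x_q[i], so key i is the positive for query i."""
    q = F.normalize(model_t(x_q), dim=1)          # queries from current model
    with torch.no_grad():
        k = F.normalize(model_hist(bank), dim=1)  # keys from historical model

    logits = q @ k.t() / tau                      # (batch, bank_size) similarities
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```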