2018
DOI: 10.1109/tip.2018.2819503
An Embarrassingly Simple Approach to Visual Domain Adaptation

Abstract: We show that it is possible to achieve high-quality domain adaptation without explicit adaptation. The nature of the classification problem means that when samples from the same class in different domains are sufficiently close, and samples from differing classes are separated by large enough margins, there is a high probability that each will be classified correctly. Inspired by this, we propose an embarrassingly simple yet effective approach to domain adaptation: only the class mean is used to learn class-spe…
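The geometric intuition in the abstract, that a nearest-class-mean rule succeeds once same-class samples from the two domains sit close together, can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's actual LDADA procedure; the function names and the plain pseudo-label/re-estimate loop are assumptions made for illustration only.

```python
import numpy as np

def nearest_class_mean(X, means):
    # Squared Euclidean distance from every sample to every class mean:
    # dists[i, c] = ||X[i] - means[c]||^2, then assign the closest class.
    dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def adapt_by_class_means(Xs, ys, Xt, n_classes, n_iters=10):
    # Hypothetical class-mean-driven adaptation loop (not the authors'
    # exact algorithm): start from the labeled source means, then
    # alternate between pseudo-labeling the target domain and
    # re-estimating each mean on the pooled source + pseudo-labeled
    # target samples.
    means = np.stack([Xs[ys == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(n_iters):
        yt = nearest_class_mean(Xt, means)
        means = np.stack([
            np.vstack([Xs[ys == c], Xt[yt == c]]).mean(axis=0)
            for c in range(n_classes)
        ])
    return means, nearest_class_mean(Xt, means)
```

On well-separated synthetic blobs, a loop like this typically recovers the target labels once the domain shift is smaller than the class margins, which is precisely the condition the abstract appeals to.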

Cited by 97 publications (35 citation statements).
References 35 publications.
“…Models learned on the training set may not generalize well to the testing set with significant variations in plant cultivars, illumination changes, and poses. In this case, the idea of domain adaptation may be applied to compensate for the performance loss (Lu et al., 2017b, 2018). All evaluation results above suggest the general applicability of TasselNetV2+ in plant counting, especially when only the count value is the output of interest.…”
Section: Further Discussion (mentioning)
confidence: 89%
“…Domain Irrelevant Class clustering (DICE) [38]: This method specifically deals with the intra-domain structure of the target domain in addition to other common properties. Linear Discriminant Analysis-inspired Domain Adaptation (LDADA) [39]: The key insight of this approach is to leverage the discriminative information from the target task, even when the target domain labels are not given. Kernelized Unified Framework for Domain Adaptation (KUFDA) [16]: This TL method improves the JGSA method by adding the Laplacian regularization term.…”
Section: Methods (mentioning)
confidence: 99%
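For context, the Laplacian regularization term credited to KUFDA above is commonly the standard graph-smoothness penalty on the projected samples. The form below uses our own notation and is the generic textbook version; whether KUFDA uses exactly this formulation is an assumption.

```latex
% Generic graph-Laplacian smoothness penalty (our notation; KUFDA's exact
% form may differ). Rows z_i of Z are projected samples, W is a sample
% affinity matrix, and L = D - W is the graph Laplacian.
R(Z) = \frac{1}{2}\sum_{i,j} W_{ij}\,\lVert z_i - z_j \rVert^2
     = \operatorname{tr}\!\left( Z^{\top} L Z \right),
\qquad L = D - W, \quad D_{ii} = \sum_{j} W_{ij}.
```

Minimizing this term encourages samples that are neighbors in the affinity graph to stay close after projection, which is how such a regularizer preserves local structure on top of JGSA's statistical alignment.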
“…The performance of the proposed DASGA method is compared to the domain adaptation methods Heterogeneous Domain Adaptation using Manifold Alignment (DAMA) [32], Easy Adapt++ (EA++) [23], Subspace Alignment (SA) [2], Geodesic Flow Kernel for Unsupervised Domain Adaptation (GFK) [3], Scatter Component Analysis (SCA) [40], LDA-Inspired Domain Adaptation (LDADA) [46], and Joint Geometrical and Statistical Alignment (JGSA) [31], as well as the baseline classifiers Support Vector Machine (SVM), Nearest-Neighbor classification (NN), and the graph-based Semi-Supervised Learning with Gaussian fields (SSL) algorithm [13]. The baseline classifiers SVM and NN are evaluated under the "source+target" setting, using the labeled samples from both the source and the target domains for training, and the SSL algorithm is used in the "target only" setting; these settings give the best results.…”
Section: Evaluation of the Algorithm Performance (mentioning)
confidence: 99%
“…The performance gap between DASGA and the other methods is larger in Synthetic dataset-2, which is a more challenging data set due to the relatively high distribution variance. Among the domain adaptation methods, DAMA [32] and LDADA [46] give the closest performance to the proposed DASGA method. The approach in both of these methods is to learn supervised projections, which is relatively successful in this synthetic data set consisting of normally distributed data.…”
Section: Experiments on Synthetic Data Sets (mentioning)
confidence: 99%