2019
DOI: 10.1016/j.knosys.2019.104975

Unsupervised domain adaptation: A multi-task learning-based method

Abstract: This paper presents a novel multi-task learning-based method for unsupervised domain adaptation. Specifically, the source and target domain classifiers are jointly learned by considering the geometry of the target domain and the divergence between the source and target domains, based on the concept of multi-task learning. Two novel algorithms are proposed upon this method, using Regularized Least Squares and Support Vector Machines respectively. Experiments on both synthetic and real-world cross-domain recognition tas…
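The abstract describes the method only at a high level. As an illustrative sketch, and not the paper's exact formulation, the code below jointly fits two linear Regularized Least Squares classifiers, coupling them with a divergence penalty ||w_s - w_t||^2 and regularizing the target classifier with a k-NN graph Laplacian standing in for the target-domain geometry. The function names, the closed-form block solve, and the hyperparameters lam_s, lam_d, and lam_g are all assumptions made for illustration.

    import numpy as np

    def knn_graph_laplacian(X, k=5):
        # Unnormalized Laplacian of a symmetrized k-nearest-neighbor graph on rows of X.
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)   # pairwise squared distances
        W = np.zeros((X.shape[0], X.shape[0]))
        nn = np.argsort(d2, axis=1)[:, 1:k + 1]            # column 0 is the point itself
        for i, idx in enumerate(nn):
            W[i, idx] = 1.0
        W = np.maximum(W, W.T)                             # symmetrize the adjacency
        return np.diag(W.sum(axis=1)) - W

    def joint_rls(Xs, ys, Xt, lam_s=0.1, lam_d=1.0, lam_g=0.1, k=5):
        # Hypothetical joint objective, quadratic in both weight vectors:
        #   ||Xs @ w_s - ys||^2 + lam_s * (||w_s||^2 + ||w_t||^2)
        #   + lam_d * ||w_s - w_t||^2               (source/target divergence)
        #   + lam_g * w_t' Xt' L Xt w_t             (target-domain geometry)
        d = Xs.shape[1]
        I = np.eye(d)
        M = Xt.T @ knn_graph_laplacian(Xt, k) @ Xt
        # Stationarity conditions stack into one (2d x 2d) linear system.
        A = np.block([[Xs.T @ Xs + (lam_s + lam_d) * I, -lam_d * I],
                      [-lam_d * I, (lam_s + lam_d) * I + lam_g * M]])
        b = np.concatenate([Xs.T @ ys, np.zeros(d)])
        w = np.linalg.solve(A, b)
        return w[:d], w[d:]                                # (w_s, w_t)

For binary labels in {-1, +1}, target predictions would then be np.sign(Xt @ w_t). Because the sketched objective is quadratic in both weight vectors, the two optimality conditions can be solved exactly in one linear system rather than by alternating minimization.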

Cited by 14 publications (17 citation statements). References 20 publications.
“…The trend is similar to the results listed in Tables 2 and 3: mtUDA [47] outperforms ADDA [3] by 0.4%. Partial transfer methods such as SAN [7], WAN [62], and our MOAN are designed with specialized GAN-based structures to evaluate samples and features, and therefore show clear advantages on these partial transfer problems.…”
Section: Statistical Results and Analyses
confidence: 95%
“…Recent adversarial learning-based methods (such as ADDA [3]) achieve higher accuracies than classical methods. The multi-task method, mtUDA [47], performs better than ADDA [3] on both tasks. The methods proposed for partial transfer learning, SAN and WAN [62], outperform the other existing methods.…”
Section: Statistical Results and Analyses
confidence: 95%
“…The drawbacks are the computational cost and the risk of converging to a local optimum. To date, the two approaches have produced similar performance on some datasets [116,152,162,274], though end-to-end deep systems involve more parameters and incur higher computational cost. A systematic study comparing the two approaches under the same or similar conditions is missing from the literature.…”
Section: Deep Transfer Learning
confidence: 99%