2021
DOI: 10.1109/tip.2021.3118978
Supervised Domain Adaptation: A Graph Embedding Perspective and a Rectified Experimental Protocol

Cited by 20 publications (19 citation statements)
References: 40 publications
“…In (Long et al, 2015), a Deep Adaptation Network (DAN) architecture was proposed that uses maximum mean discrepancy (MMD) (Gretton et al, 2012) to find a domain-invariant feature space. In (Hedegaard et al, 2021), it was shown that supervised DA can be seen as a two-view Graph Embedding. In (Pratama et al, 2019b), a method was proposed that combines DA techniques and a drift handling mechanism to solve the multistream classification problem under multisource streams.…”
Section: Model Re-usability
confidence: 99%
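The MMD criterion referenced in the statement above has a compact empirical form. Below is a minimal sketch, not taken from any of the cited papers, of a biased RBF-kernel MMD² estimate between a batch of source features and a batch of target features; the function names and the bandwidth value are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of a and b, mapped through an RBF kernel.
    sq_dists = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=1.0):
    # Biased empirical estimate of squared Maximum Mean Discrepancy:
    # MMD^2 = E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]
    k_ss = rbf_kernel(source, source, gamma)
    k_tt = rbf_kernel(target, target, gamma)
    k_st = rbf_kernel(source, target, gamma)
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()

# Toy usage: feature batches drawn from two shifted distributions.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 16))
tgt = rng.normal(0.5, 1.0, size=(64, 16))
print(mmd2(src, tgt))
```

In a DAN-style setup this quantity would be added to the classification loss so that minimizing it pulls the source and target feature distributions together.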
“…There have been different approaches to reduce computational complexity when training deep neural networks, such as designing novel low-complexity network architectures (Kiranyaz et al, 2017;Tran et al, 2019c;Tran & Iosifidis, 2019;Tran et al, 2020;Kiranyaz et al, 2020;Heidari & Iosifidis, 2020), replacing existing ones with their low-rank counterparts (Denton et al, 2014;Jaderberg et al, 2014;Tran et al, 2018;Huang & Yu, 2018;Ruan et al, 2020), or adapting the pre-trained models to new tasks, i.e., performing Transfer Learning (TL) (Shao et al, 2014;Yang et al, 2015;Ding et al, 2016;Ding & Fu, 2018;Fons et al, 2020) or Domain Adaptation (DA) learning (Duan et al, 2012;Wang et al, 2019;Zhao et al, 2020;Hedegaard et al, 2021). Among these approaches, model adaptation is the most versatile since a method in this category is often architecture-agnostic, being complementary to other approaches.…”
Section: Introduction
confidence: 99%
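As a concrete illustration of the "low-rank counterparts" route mentioned in the quote above, here is a minimal sketch, assumed for illustration rather than drawn from any cited work, that replaces a dense weight matrix with a rank-r factorization obtained via truncated SVD, turning one large layer into two smaller ones.

```python
import numpy as np

def low_rank_factorize(W, rank):
    # Truncated SVD: W (d_out x d_in) is approximated as W2 @ W1,
    # i.e. one d_in -> rank projection followed by one rank -> d_out projection.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W1 = Vt[:rank, :]             # rank x d_in
    W2 = U[:, :rank] * S[:rank]   # d_out x rank (singular values folded in)
    return W1, W2

# Toy check: parameters drop from d_out*d_in to rank*(d_in + d_out).
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))
W1, W2 = low_rank_factorize(W, rank=32)
x = rng.normal(size=(512,))
rel_err = np.linalg.norm(W @ x - W2 @ (W1 @ x)) / np.linalg.norm(W @ x)
print(rel_err)
```

The same idea applies per layer of a pre-trained network, typically followed by fine-tuning to recover accuracy lost to the truncation.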
“…Next, a few labeled target observations are used as reference points to adjust similarity structures among label categories. Finally, we refer to the work by Hedegaard et al. [47] for a discussion and critique of the generic test setup used in the supervised domain adaptation literature and a proposal of a fair evaluation protocol.…”
Section: Neural Network, Deep Learning and Transfer Learning
confidence: 99%
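For readers unfamiliar with the protocol issue raised by Hedegaard et al. [47], the sketch below illustrates one plausible reading of a rectified setup: the few labeled target samples used for adaptation are sampled per class and kept disjoint from the target data used for testing. The split sizes, seeds, and function names are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def split_target(y_target, n_labeled_per_class, seed=0):
    """Sample a few labeled target indices per class for adaptation and keep the
    remaining target indices as a disjoint held-out test set.
    (Illustrative assumption of a rectified setup, not the exact protocol.)"""
    rng = np.random.default_rng(seed)
    adapt_idx, test_idx = [], []
    for c in np.unique(y_target):
        idx = rng.permutation(np.where(y_target == c)[0])
        adapt_idx.extend(idx[:n_labeled_per_class])
        test_idx.extend(idx[n_labeled_per_class:])
    return np.array(adapt_idx), np.array(test_idx)

# Toy usage: 3 classes, 3 labeled target samples per class, rest held out for testing.
y_t = np.repeat([0, 1, 2], 20)
adapt_idx, test_idx = split_target(y_t, n_labeled_per_class=3)
assert np.intersect1d(adapt_idx, test_idx).size == 0  # adaptation and test sets are disjoint
```

Repeating such a split over several seeds and reporting the mean and spread is a common way to keep few-shot adaptation results comparable across methods.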
“…In a semi-supervised DA (SSDA) scheme [8], both a small amount of labeled and a considerable amount of unlabelled target data are accessible. Alternatively, supervised domain adaptation (SDA) [9, 10] supposes that all available target samples are annotated, although their number is small. Sophisticated SDA approaches can usually outperform UDA and SSDA ones when the amount of available data is small [6].…”
Section: Introduction
confidence: 99%
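To make the three data regimes in the quote above concrete, the snippet below is a small illustrative sketch; the variable names (Xs, ys, Xt_few, yt_few, Xt_unlabeled) are assumptions, not identifiers from the cited works.

```python
# What a learner may access at training time in each adaptation regime (illustrative only).
uda_train = {"source": ("Xs", "ys"), "target": ("Xt_unlabeled",)}                        # UDA: no target labels
ssda_train = {"source": ("Xs", "ys"), "target": ("Xt_few", "yt_few", "Xt_unlabeled")}    # SSDA: few labels + many unlabeled
sda_train = {"source": ("Xs", "ys"), "target": ("Xt_few", "yt_few")}                     # SDA: all target samples labeled, but few
```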