Extreme Learning Machine Based on Maximum Weighted Mean Discrepancy for Unsupervised Domain Adaptation
2021
DOI: 10.1109/access.2020.3047448

Cited by 10 publications (10 citation statements)
References 32 publications (36 reference statements)
“…However, these approaches are developed to solve semi-supervised domain adaptation problems because they require a few labeled samples from the target domains. Owing to the high cost of collecting and labeling samples, cross-domain ELM (CDELM) [39], domain space transfer ELM (DST-ELM) [40], cross-domain extreme learning machine (CdELM) [41], and extreme learning machine based on maximum weighted mean discrepancy (ELM-MWMD) [42] were proposed for unsupervised domains, minimizing the classification loss and applying the maximum mean discrepancy (MMD) strategy to the prediction results. In the above methods, the supervised ELM model usually outperforms the unsupervised ones with the help of a few labeled samples from the target domain.…”
Section: Introduction
confidence: 99%
“…In this article, inspired by pioneering works [38, 42], we propose a novel method, the two-stage transfer extreme learning machine (TSTELM), which performs domain adaptation in two stages: statistical matching and subspace alignment. At the statistical matching stage, we first learn a domain adaptation ELM classifier that uses the MMD to simultaneously minimize the marginal and conditional distribution divergence between domains.…”
Section: Introduction
confidence: 99%
“…In UDA [18]-[25], the source samples and unlabeled target samples are combined for training. The knowledge obtained from supervised training on the source domain is transferred to the target domain.…”
Section: Introduction
confidence: 99%
“…Therefore, we can improve learning performance by aligning their distributions. In fact, domain adaptation [2], [3], [6], [7] can reduce the distribution divergence between the source and target domains. It employs the labeled source domain to boost the learning task in a target domain with few labels or even no labels.…”
Section: Introduction
confidence: 99%
“…It should be pointed out that the methods mentioned above [8], [10], [23], [24] ignore the fact that samples after projection may not be discriminative enough for the final classification. Moreover, in UDA the labels of the target domain are not available [25].…”
Section: Introduction
confidence: 99%