2022
DOI: 10.1155/2022/1582624
TSTELM: Two-Stage Transfer Extreme Learning Machine for Unsupervised Domain Adaptation

Abstract: As a single-layer feedforward network (SLFN), the extreme learning machine (ELM) has been successfully applied to classification and regression tasks in machine learning owing to its fast training speed and good generalization. However, it performs poorly under domain adaptation, where the distributions of the training data and testing data are inconsistent. In this article, we propose a novel ELM called the two-stage transfer extreme learning machine (TSTELM) to solve this problem. At the statistical matching stage…
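For readers unfamiliar with ELM, the following is a minimal NumPy sketch of a standard single-hidden-layer ELM (random, untrained hidden weights; closed-form output weights). It is not the authors' TSTELM code, and the hidden-layer size and ridge constant are arbitrary illustrative choices.

```python
import numpy as np

def train_elm(X, Y, n_hidden=200, reg=1e-3, seed=0):
    """Standard ELM: the hidden layer is random and never trained;
    only the output weights beta are solved in closed form via
    ridge-regularized least squares. Sketch only, not TSTELM."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden activations
    # beta = (H^T H + reg I)^{-1} H^T Y -- the only trained parameters
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    """Class scores; with one-hot labels Y, predict via argmax."""
    return np.tanh(X @ W + b) @ beta
```

This single closed-form solve, in place of iterative gradient training, is what gives ELM the speed advantage the abstract refers to.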

Cited by 5 publications (2 citation statements)
References 58 publications (80 reference statements)
“…To address this issue, some improvements have been made to ELM. Zang et al. [20] proposed a two-stage transfer extreme learning machine (TSTELM) framework. The framework uses MMD at the statistical matching stage to reduce the distribution differences of the output layers between domains.…”
Section: Introduction (mentioning)
confidence: 99%
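The MMD criterion mentioned above is a standard distribution-distance measure. Below is a hedged sketch of the generic biased empirical MMD² estimator with an RBF kernel; the `gamma` bandwidth is an assumed placeholder, and TSTELM's exact output-layer formulation follows the paper rather than this generic form.

```python
import numpy as np

def mmd2_rbf(Xs, Xt, gamma=1.0):
    """Biased empirical MMD^2 between source samples Xs and target
    samples Xt under an RBF kernel k(x, y) = exp(-gamma ||x - y||^2).
    Generic estimator; TSTELM applies an MMD term to network
    representations rather than raw features."""
    def gram(A, B):
        sq = ((A**2).sum(1)[:, None] + (B**2).sum(1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-gamma * np.maximum(sq, 0.0))  # clip fp noise
    return (gram(Xs, Xs).mean() + gram(Xt, Xt).mean()
            - 2.0 * gram(Xs, Xt).mean())
```

Minimizing this quantity pulls the two empirical distributions together, which is how MMD-based terms promote knowledge transfer across domains.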
“…Chen et al. [41] presented a transfer ELM in which output-weight alignment was applied to reduce domain bias and an L2,1-norm penalty was imposed on the output weights to enhance feature selection. To minimize the distribution discrepancy between the source and target domains, Li et al. [42], Chen et al. [43], and Zang et al. [44] utilized Maximum Mean Discrepancy (MMD) [45] to promote knowledge transfer in their respective models. Owing to insufficient target-sample labels, the performance of unsupervised models is usually lower than that of supervised models, yet labeled target samples are hard to collect.…”
Section: Introduction (mentioning)
confidence: 99%
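The L2,1-norm referenced for Chen et al.'s model [41] is the sum of the row-wise Euclidean norms of the output weight matrix; penalizing it drives entire rows to zero, deactivating whole hidden nodes and thus acting as feature selection. A minimal sketch:

```python
import numpy as np

def l21_norm(beta):
    """L2,1-norm: sum over rows of each row's L2 norm. Used as a
    regularizer on ELM output weights, it zeroes whole rows
    (hidden nodes), yielding structured feature selection."""
    return np.linalg.norm(beta, axis=1).sum()
```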