2019
DOI: 10.1016/j.imavis.2019.08.006
A novel unsupervised Globality-Locality Preserving Projections in transfer learning

Cited by 10 publications (3 citation statements)
References 24 publications
“…As an extension of TCA, Joint Distribution Adaptation (JDA) [14] aligns the conditional and marginal distributions simultaneously using pseudo labels; an iterative training scheme is then employed to obtain more accurate labels. Inspired by this, a series of studies modified classical feature extraction methods (e.g., linear discriminant analysis and locality preserving projections [15,16]) by means of pseudo labels.…”
Section: Pseudo Label Without Selection
confidence: 99%
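The iterative pseudo-label scheme described above can be sketched as follows. This is a minimal illustration, not the method of the cited paper: a classifier trained on labeled source data assigns pseudo labels to the unlabeled target data, then the model is re-fit on both domains until the pseudo labels stabilize. The nearest-centroid classifier and the synthetic shifted-domain data are illustrative assumptions.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Return the class labels and one mean vector (centroid) per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    """Assign each sample the label of its nearest centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def pseudo_label_iterate(Xs, ys, Xt, n_iter=10):
    """JDA-style pseudo-label refinement loop (illustrative sketch only)."""
    # Initial pseudo labels come from a source-only classifier.
    classes, centroids = nearest_centroid_fit(Xs, ys)
    yt = nearest_centroid_predict(Xt, classes, centroids)
    for _ in range(n_iter):
        # Re-fit on source labels plus the current target pseudo labels.
        X = np.vstack([Xs, Xt])
        y = np.concatenate([ys, yt])
        classes, centroids = nearest_centroid_fit(X, y)
        new_yt = nearest_centroid_predict(Xt, classes, centroids)
        if np.array_equal(new_yt, yt):  # pseudo labels have stabilized
            break
        yt = new_yt
    return yt

# Synthetic two-class source domain; target domain is a shifted copy,
# mimicking the domain gap that transfer learning addresses.
rng = np.random.default_rng(0)
Xs = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
ys = np.array([0] * 20 + [1] * 20)
Xt = Xs + 0.8
yt = pseudo_label_iterate(Xs, ys, Xt)
```

In the full JDA method, each iteration also re-learns the feature transformation that aligns the marginal and conditional distributions; here only the label-refinement loop is shown.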
“…Performance (accuracy %) on Office-Caltech10 (No. 1–12) and ImageCLEF (No. 13–18)…”
confidence: 99%
“…Moreover, it was often impossible to collect enough meaningful data to train a model, although many researchers attempted to gather it. To address this, various transfer learning methods have been proposed for transferring knowledge in the form of features, instance weights, parameters, or relationship information between data samples in a domain [13][14][15][16]. Figure 1 shows the four steps of creating a complete model using transfer learning.…”
Section: Transfer Learning
confidence: 99%