2020 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn48605.2020.9206877
Statistical and Geometrical Alignment using Metric Learning in Domain Adaptation

Cited by 5 publications (4 citation statements); references 24 publications.
“…To demonstrate the efficiency of our VTL, the results of our experiments are compared with several unsupervised domain adaptation methods including EMFS (2018) [40], EasyTL (2019) [41], STJML (2020) [42], GEF (2019) [43], DWDA (2021) [44], CDMA (2020) [45], ALML (2022) [46], TTLC (2021) [33], SGA-MDAP (2020) [47], NSO (2020) [48], FSUTL (2020) [49], PLC (2021) [50], GSI (2021) [51] and ICDAV (2022) [52]. In the experiments, VTL begins with learning a domain invariant and class discriminative latent feature space according to Equation (18).…”
Section: Results
confidence: 99%
“…Before being processed by SVM, the data will be converted into a feature vector using statistical calculations [31]. Feature selection is one strategy for forming a new feature representation, alongside feature mapping, feature clustering, feature encoding, feature alignment, and feature augmentation [32]. This strategy preserves the local and important structure of the domain while also reducing the cross-domain distribution difference.…”
Section: Machine
confidence: 99%
“…In the proposed feature-transfer learning, MMD is used in conjunction with a kernel that maps the original feature space of each domain to a new feature representation through a mapping function. Our proposed method uses a distance function as the mapping function, namely the Euclidean distance in a Reproducing Kernel Hilbert Space (RKHS) [13] [14] [15] [16] [18] [32]. The use of MMD and a kernel makes it sufficient to calculate cross-domain similarity using density estimation.…”
Section: Machine
confidence: 99%
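The MMD-with-kernel idea quoted above can be sketched in a few lines. This is an illustrative example only, not code from the cited papers: the function names (`rbf_kernel`, `mmd2`), the choice of an RBF kernel, and the `gamma` parameter are assumptions; the statistic computed is the standard (biased) empirical squared MMD between two samples embedded in an RKHS.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2),
    # computed via the expansion ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=1.0):
    # Biased empirical estimate of squared MMD between the source and
    # target samples: E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)].
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

Identical samples give a squared MMD of (numerically) zero, while a shifted copy of the same sample gives a clearly positive value, which is what makes the statistic usable as a cross-domain discrepancy term in domain adaptation objectives.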