2019
DOI: 10.1109/tnnls.2018.2861891
Nonlinear Dimensionality Reduction With Missing Data Using Parametric Multiple Imputations

Cited by 23 publications (18 citation statements)
References 57 publications
“…This feature hinders the ability to analyze whether the neighborhoods are reproduced at different data scales and does not highlight the local and global properties of the mapping. For these reasons, some studies developed dimensionality reduction (DR) quality criteria which measure the high-dimensional neighborhood preservation in the projection [43], becoming generally adopted in several publications [44][45][46]. This neighborhood preservation principle is indeed considered as the driving factor in the DR quality [47].…”
Section: Neighborhood Preservation Assessment
confidence: 99%
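The excerpt above refers to quality criteria that measure how well high-dimensional neighborhoods are preserved in the projection. A common family of such criteria scores, for each point, the overlap between its K nearest neighbors before and after the mapping. The sketch below is a minimal pure-Python illustration of that idea (a Q_NX-style K-ary neighborhood agreement); it is my own simplified stand-in, not the exact criterion defined in the cited works.

```python
def knn_indices(points, k):
    """For each point, the set of indices of its k nearest neighbors (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    n = len(points)
    neigh = []
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: dist2(points[i], points[j]))
        neigh.append(set(order[:k]))
    return neigh

def neighborhood_preservation(high, low, k):
    """Average fraction of each point's k nearest high-dimensional neighbors
    that are also among its k nearest neighbors in the low-dimensional embedding.
    Returns 1.0 for a perfectly neighborhood-preserving mapping."""
    hn = knn_indices(high, k)
    ln = knn_indices(low, k)
    return sum(len(h & l) for h, l in zip(hn, ln)) / (k * len(high))
```

Evaluating this score for a range of K values is what lets one inspect neighborhood reproduction "at different data scales", as the excerpt notes.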
“…It is common for existing kernel-based methods to reveal the underlying structure of data by calculating the pairwise similarity between samples. However, in many successful machine learning algorithms, such as dimensionality reduction [34], [35], clustering [16], [36], and recent feature selection algorithms [32], [37], [38], researchers find that it is beneficial to preserve only the reliable local geometry as a representation of the data structure. There are two main underlying reasons.…”
Section: A Construction Of the Neighbor Kernel
confidence: 99%
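The excerpt above contrasts dense pairwise-similarity kernels with methods that keep only the reliable local geometry. A generic way to do this is to compute a Gaussian similarity but retain entries only for each point's k nearest neighbors, then symmetrize. The sketch below illustrates that construction; the actual neighbor kernel in the cited work may differ in its weighting and symmetrization choices.

```python
import math

def neighbor_kernel(points, k, sigma=1.0):
    """Gaussian similarity restricted to each point's k nearest neighbors.
    Non-neighbor entries stay 0, so only local geometry is represented.
    The result is symmetrized by taking the elementwise maximum."""
    n = len(points)
    d2 = [[sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
           for j in range(n)] for i in range(n)]
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        order = sorted((j for j in range(n) if j != i), key=lambda j: d2[i][j])
        for j in order[:k]:
            K[i][j] = math.exp(-d2[i][j] / (2 * sigma ** 2))
    # symmetrize: keep an edge if either endpoint counts the other as a neighbor
    for i in range(n):
        for j in range(i + 1, n):
            K[i][j] = K[j][i] = max(K[i][j], K[j][i])
    return K
```

Sparsifying the kernel this way discards long-range similarities, which (per the excerpt's reasoning) are less reliable as a description of the data's structure.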
“…B. Multiple Imputations: [7] Recently, many embedding dimensionality reduction methods have been developed to counter the 'curse of dimensionality', but these methods cannot be applied directly to incomplete data sets. This limitation is addressed by developing general methods for nonlinear dimensionality reduction with missing data.…”
Section: Dimensionality Reduction Techniques Using Deep Learning
confidence: 99%
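The multiple-imputation idea referenced above replaces an incomplete data set with several plausible completions, so that downstream DR can be run on each and the results combined. The sketch below is a deliberately simple stand-in: each missing value is drawn from a normal distribution fitted to the observed values of its column. The paper's parametric imputation model is richer (it conditions on the observed entries of each row), so treat this only as an illustration of producing m completions.

```python
import random
import statistics

def multiple_imputations(rows, m=5, seed=0):
    """Return m plausible completions of `rows`, where missing entries are None.
    Each missing value is sampled from a Gaussian fitted to the observed values
    of its column (a crude stand-in for a full parametric imputation model)."""
    rng = random.Random(seed)
    col_stats = []
    for col in zip(*rows):
        obs = [v for v in col if v is not None]
        mu = statistics.mean(obs)
        sd = statistics.stdev(obs) if len(obs) > 1 else 0.0
        col_stats.append((mu, sd))
    completions = []
    for _ in range(m):
        completions.append([
            [v if v is not None else rng.gauss(*col_stats[j])
             for j, v in enumerate(row)]
            for row in rows
        ])
    return completions
```

Each completion can then be fed to a nonlinear DR method, and the m embeddings combined, which is the general strategy the excerpt describes for handling missing data.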