2015
DOI: 10.1016/j.neucom.2013.11.045

Parametric nonlinear dimensionality reduction using kernel t-SNE

Abstract: Novel non-parametric dimensionality reduction techniques such as t-distributed stochastic neighbor embedding (t-SNE) lead to a powerful and flexible visualization of high-dimensional data. One drawback of non-parametric techniques is their lack of an explicit out-of-sample extension. In this contribution, we propose an efficient extension of t-SNE to a parametric framework, kernel t-SNE, which preserves the flexibility of basic t-SNE, but enables explicit out-of-sample extensions. We test the ability of kernel…
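In kernel t-SNE, the explicit out-of-sample map is a normalized kernel expansion over the training points, y(x) = Σ_j α_j · k(x, x_j) / Σ_l k(x, x_l), with the coefficient vectors α_j fitted so that the map reproduces a precomputed t-SNE embedding of the training set. A minimal NumPy sketch of this construction; the Gaussian bandwidth `sigma` and the ridge regularizer `lam` are illustrative choices here, not values prescribed by the paper (which fits the coefficients via a pseudo-inverse):

```python
import numpy as np

def fit_kernel_tsne_map(X_train, Y_train, sigma=1.0, lam=1e-3):
    """Fit coefficients A so that y(x) = k_norm(x) @ A reproduces Y_train.

    X_train: (n, d) high-dimensional training data
    Y_train: (n, 2) precomputed t-SNE embedding of X_train
    """
    # Gaussian kernel matrix between all training points
    sq = ((X_train[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    # Row-normalize so the kernel weights of each point sum to one
    K /= K.sum(axis=1, keepdims=True)
    # Ridge-regularized least squares: K @ A ~= Y_train
    A = np.linalg.solve(K.T @ K + lam * np.eye(len(X_train)), K.T @ Y_train)
    return A

def map_points(X_new, X_train, A, sigma=1.0):
    """Explicit out-of-sample extension: embed unseen points."""
    sq = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    K /= K.sum(axis=1, keepdims=True)
    return K @ A
```

Because the map is an explicit function of x, new data can be projected without re-running the full t-SNE optimization, which is the point of the parametric extension.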

Cited by 210 publications (128 citation statements)
References 22 publications
“…The MNIST dataset consists of 60,000 training and 10,000 test gray-level 784-dimensional images. The Fashion dataset has the same number of classes, training points, and test points as MNIST, but is designed to classify 10 fashion products, such as boots, coats, and bags; each product is shown in a set of pictures taken by professional photographers from different angles, such as front and back views, on a model, and in an outfit.…”
Section: Methods (mentioning)
confidence: 99%
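Both datasets are available through standard loaders; a quick shape check using the Keras loaders (one possible source among several) confirms the figures quoted above:

```python
from tensorflow.keras.datasets import mnist, fashion_mnist

# MNIST: 60,000 training and 10,000 test grayscale images
(x_tr, y_tr), (x_te, y_te) = mnist.load_data()
print(x_tr.shape, x_te.shape)   # (60000, 28, 28) (10000, 28, 28)

# Fashion-MNIST: same sizes, same 10-class setup
(f_tr, g_tr), (f_te, g_te) = fashion_mnist.load_data()
print(f_tr.shape, f_te.shape)   # (60000, 28, 28) (10000, 28, 28)

# Flattening 28x28 images yields the 784-dimensional vectors mentioned above
x_tr = x_tr.reshape(-1, 784)
```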
“…This small modification has significant benefits. Because z << n, the objective function in Equation 12 has a much lower computational cost than Equation 6, whose complexity is quadratic in n; in practice, the number of representative exemplars is much smaller than n for real-world large datasets.…”
Section: Parametric t-Distributed Stochastic Exemplar-Centered Embedding (mentioning)
confidence: 99%
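The complexity argument is easy to see in code: computing affinities of n points against z exemplars costs O(nz) work and memory, versus O(n²) for the all-pairs matrix of standard t-SNE. A hedged sketch of this idea; the random exemplar selection below is a placeholder (k-means centers are a common alternative), not necessarily the selection scheme of the cited paper:

```python
import numpy as np

def exemplar_sq_distances(X, Z):
    """Squared distances from n data points to z exemplars: O(n*z) work,
    versus O(n^2) for the full pairwise matrix used by standard t-SNE."""
    # ||x - z||^2 = ||x||^2 - 2 x.z + ||z||^2, without forming an (n, n) matrix
    d2 = (X ** 2).sum(1)[:, None] - 2 * X @ Z.T + (Z ** 2).sum(1)[None, :]
    return np.maximum(d2, 0.0)  # clip tiny negative values from round-off

n, z, d = 10000, 500, 784                      # z << n, as in the excerpt
X = np.random.randn(n, d).astype(np.float32)   # stand-in for real data
idx = np.random.choice(n, z, replace=False)    # naive exemplar pick
Z = X[idx]
D = exemplar_sq_distances(X, Z)                # shape (10000, 500), not (10000, 10000)
```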
“…On the contrary, supervised approaches guarantee that the information used to represent the data is relevant for the task, suppressing irrelevant aspects and noise. This is particularly relevant if only a few data points are available, since the problem would be ill-defined without shaping it according to such auxiliary knowledge [19].…”
Section: Representation Learning (mentioning)
confidence: 99%
“…It was found that mismatching the neighbourhood distributions allowed for better local clustering, with examples given in the supplementary material of [29]. Further research into t-SNE has produced feed-forward mappings using both kernel methods [5,13] and deep belief nets [27]. In addition, the impact of different measures of ‘closeness’ between the observed and visualized neighbourhood distributions is discussed in [6,9].…”
Section: Introduction (mentioning)
confidence: 99%
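The "mismatch" here refers to using different similarity kernels in the two spaces: Gaussian neighbourhoods in the input space but a heavier-tailed Student-t kernel in the embedding. One common generalization exposes the tail weight as a degrees-of-freedom parameter α, with q_ij ∝ (1 + ||y_i − y_j||² / α)^(−(α+1)/2) and α = 1 recovering standard t-SNE; a small sketch of that low-dimensional similarity (the parameterization is an assumption for illustration, not necessarily the exact form used in [29]):

```python
import numpy as np

def low_dim_similarities(Y, alpha=1.0):
    """Heavy-tailed embedding similarities
    q_ij ~ (1 + ||y_i - y_j||^2 / alpha)^(-(alpha + 1) / 2).

    alpha = 1 gives the standard t-SNE Student-t kernel; smaller alpha
    means heavier tails, which pushes dissimilar points further apart
    and can tighten local clusters.
    """
    sq = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    W = (1.0 + sq / alpha) ** (-(alpha + 1.0) / 2.0)
    np.fill_diagonal(W, 0.0)   # q_ii is defined as 0 in t-SNE
    return W / W.sum()         # normalize over all pairs
```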