2019
DOI: 10.1007/978-3-030-10925-7_29

Parametric t-Distributed Stochastic Exemplar-Centered Embedding

Abstract: Parametric embedding methods such as parametric t-distributed Stochastic Neighbor Embedding (pt-SNE) enable out-of-sample data visualization without further computationally expensive optimization or approximation. However, pt-SNE favors small mini-batches to train a deep neural network but large mini-batches to approximate its cost function involving all pairwise data point comparisons, and thus has difficulty finding a balance. To resolve this conflict, we present parametric t-distributed stochastic exempl…
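The sketch below is a minimal, illustrative take on the pt-SNE setup the abstract refers to, not the paper's exemplar-centered method: an MLP maps high-dimensional points to 2-D and is trained per mini-batch with a KL divergence between Gaussian input affinities and Student-t output affinities. All names, layer sizes, and the fixed bandwidth are assumptions for illustration; note how the mini-batch size serves double duty as both the SGD batch and the set over which pairwise affinities are approximated, which is the tension the abstract describes.

```python
# Minimal pt-SNE-style parametric embedding sketch (illustrative, not the paper's method).
import torch
import torch.nn as nn

def pairwise_sq_dists(x):
    # Squared Euclidean distances between all rows of x: (B, D) -> (B, B).
    sq = (x ** 2).sum(dim=1, keepdim=True)
    return (sq + sq.t() - 2.0 * x @ x.t()).clamp(min=0.0)

def input_affinities(x, sigma=1.0):
    # Gaussian affinities P over the mini-batch (fixed bandwidth here;
    # real t-SNE tunes sigma per point via a perplexity target).
    p = torch.exp(-pairwise_sq_dists(x) / (2.0 * sigma ** 2))
    p = p * (1.0 - torch.eye(x.size(0)))   # zero the diagonal
    return p / p.sum()

def output_affinities(y):
    # Student-t (1 degree of freedom) affinities Q in the low-dimensional space.
    q = 1.0 / (1.0 + pairwise_sq_dists(y))
    q = q * (1.0 - torch.eye(y.size(0)))   # zero the diagonal
    return q / q.sum()

class EmbeddingNet(nn.Module):
    # Hypothetical architecture; layer sizes are illustrative only.
    def __init__(self, dim_in, dim_out=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, dim_out),
        )
    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, batch):
    # One mini-batch step: every pairwise comparison happens *within* the batch,
    # so a large batch approximates the full cost better, while SGD on the
    # network prefers a small one -- the conflict mentioned in the abstract.
    p = input_affinities(batch)
    q = output_affinities(model(batch))
    eps = 1e-12
    loss = torch.sum(p * torch.log((p + eps) / (q + eps)))  # KL(P || Q)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    x = torch.randn(512, 50)                  # toy high-dimensional data
    model = EmbeddingNet(dim_in=50)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(5):
        perm = torch.randperm(x.size(0))
        for i in range(0, x.size(0), 128):     # mini-batch size B = 128
            train_step(model, opt, x[perm[i:i + 128]])
    # Out-of-sample points are embedded by a single forward pass, no re-optimization.
    print(model(torch.randn(10, 50)).shape)    # torch.Size([10, 2])
```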

Cited by 4 publications (2 citation statements)
References 13 publications
“…Although t-SNE is non-parametric, there is a parametric t-SNE version [76], which seeks to overcome this limitation. Nonetheless, it is often difficult to find an optimal configuration of hyperparameters for these models, which in turn yields noisy projections compared to those obtained using the non-parametric version of the algorithm [44,87,93].…”
Section: Visual Exploration of Multidimensional Data
confidence: 99%
“…To obtain consistency in the item positioning, several works [12], [13] proposed the use of deep neural networks to mimic the behavior of t-SNE. As with any trained algorithm with no memory or update mechanism, the inference results are purely deterministic.…”
Section: Introduction
confidence: 99%
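A tiny sketch of the determinism point made in that citation statement, under assumed setup (the mapper below stands in for any trained parametric embedding network and is not taken from the cited works): a trained mapper is a pure function, so embedding the same points twice yields identical coordinates, with no per-run stochastic optimization as in plain t-SNE.

```python
# Deterministic inference with a (stand-in) trained parametric mapper.
import torch
import torch.nn as nn

torch.manual_seed(0)
mapper = nn.Sequential(nn.Linear(50, 32), nn.ReLU(), nn.Linear(32, 2))  # placeholder for a trained network
mapper.eval()

points = torch.randn(5, 50)
with torch.no_grad():
    first = mapper(points)
    second = mapper(points)
print(torch.equal(first, second))  # True: same inputs always map to the same layout
```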