Type 1 and 2 mixtures of Kullback–Leibler divergences as cost functions in dimensionality reduction based on similarity preservation
Year: 2013
DOI: 10.1016/j.neucom.2012.12.036

Cited by 79 publications (66 citation statements)
References 29 publications
“…Further options are to let the analyst determine interesting features in combination with subspace clustering (e.g., [41]) or quality metrics (e.g., [31]). S4 Feature Selection & Emphasis was the most frequently implemented interaction scenario (37). S5 DR Parameter Tuning: Some DR algorithms contain specific parameters that can be tuned, such as LDA regularization in [13].…”
Section: S4 Feature Selection and Emphasis
confidence: 99%
“…Observations: The final result of our coding process is shown in Table 1 and Figure 4. To provide an overview of the coded results, we created a 2D projection of the papers using Multiscale Jensen-Shannon Embedding [37], which aims to place papers with similar codes nearby in the projection. Together with Table 1 we can investigate combinations of interaction scenarios.…”
Section: S6 Defining Constraints
confidence: 99%
“…Here, the range is that of Q_NX(K). The last one, R_NX(K) [60], can be considered a renormalised Q_NX(K), allowing us to compare values at different scales. R_NX(K) is based on Q_NX(K) with a baseline subtraction and a normalisation: it indicates the relative improvement over a random embedding.…”
Section: Eq. (6)
confidence: 99%
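For reference, the two criteria quoted above are commonly defined as follows in the rank-based quality framework (a sketch of the standard definitions, not necessarily the exact notation of reference [60]; N is the number of points and \nu_i^K, n_i^K are the K-ary neighbourhoods of point i in the high-dimensional data and in the embedding, respectively):

Q_{NX}(K) = \frac{1}{KN} \sum_{i=1}^{N} \left| \nu_i^K \cap n_i^K \right|,
\qquad
R_{NX}(K) = \frac{(N-1)\, Q_{NX}(K) - K}{N - 1 - K}.

The baseline K/(N-1) is the expected value of Q_{NX}(K) for a random embedding, so R_{NX}(K) reads as the improvement over chance: about 0 for a random layout and 1 for perfect neighbourhood preservation.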
“…Genuine similarity preservation, with similarities in both HD and LD spaces, appeared later with stochastic neighbour embedding [12] (SNE). Interest in this new paradigm grew after the publication of variants such as t-distributed SNE (t-SNE) [9], neighbourhood retrieval and visualisation (NeRV) [13], and Jensen-Shannon embedding (JSE) [14]. These methods significantly outperformed older ones in terms of DR quality, especially when it comes to accurately rendering small-size neighbourhoods around each datum.…”
Section: Introduction
confidence: 99%
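To make the similarity-preservation paradigm in this excerpt concrete, here is a minimal NumPy sketch (illustrative only, not the authors' code; the function names, the Gaussian conditional similarities, and the fixed bandwidths are assumptions) of the classical SNE cost, i.e. the sum over points of the type 1 Kullback–Leibler divergence between high-dimensional and low-dimensional neighbour distributions:

import numpy as np

def conditional_similarities(X, sigma=1.0):
    # Row-stochastic matrix of Gaussian conditional similarities p_{j|i}.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared Euclidean distances
    np.fill_diagonal(d2, np.inf)                      # exclude self-similarity (exp(-inf) = 0)
    logits = -d2 / (2.0 * sigma ** 2)
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

def sne_kl_cost(X_hd, X_ld, sigma_hd=1.0, sigma_ld=1.0, eps=1e-12):
    # Classical SNE cost: sum_i KL(P_i || Q_i), with P from the HD data and Q from the embedding.
    P = conditional_similarities(X_hd, sigma_hd)
    Q = conditional_similarities(X_ld, sigma_ld)
    return float(np.sum(P * (np.log(P + eps) - np.log(Q + eps))))

For context, a type 1 mixture in the sense of the cited paper weights the two directed divergences KL(P_i || Q_i) and KL(Q_i || P_i) (as NeRV does), whereas a type 2 mixture blends the distributions themselves inside the divergence, in a Jensen–Shannon fashion (as JSE does).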