2020
DOI: 10.1109/access.2020.3038460
Single- and Multi-Distribution Dimensionality Reduction Approaches for a Better Data Structure Capturing

Cited by 5 publications (11 citation statements)
References 32 publications
“…Furthermore, a permutation test with 100,000 iterations was conducted to determine the significance of the correlation between MDS coordinates and valence ratings. When distance-based methods are used to convert high-dimensional data into lower dimensions, the resulting distribution deviates from the pattern assumed by traditional linear models (Hajderanj et al., 2020). In this context, we set the threshold for the MDS-valence rating correlation coefficient at the top 5% (α = 0.05) to assess statistical significance.…”
Section: Multidimensional Scaling (MDS) (mentioning, confidence: 99%)
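The permutation test described in the statement above can be sketched as follows. This is a minimal illustration with synthetic stand-in data (the cited study used its own MDS coordinates, valence ratings, and 100,000 permutations); the variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: one MDS coordinate axis and valence ratings
# for 30 stimuli. The real study had its own data.
mds_axis = rng.normal(size=30)
valence = 0.5 * mds_axis + rng.normal(scale=1.0, size=30)

observed_r = np.corrcoef(mds_axis, valence)[0, 1]

# Build the null distribution by shuffling the valence labels.
n_perm = 10_000  # reduced from the paper's 100,000 for a quick demo
null_r = np.empty(n_perm)
for i in range(n_perm):
    null_r[i] = np.corrcoef(mds_axis, rng.permutation(valence))[0, 1]

# Top-5% criterion (alpha = 0.05): is the observed correlation above
# the 95th percentile of the permutation null?
threshold = np.quantile(null_r, 0.95)
p_value = np.mean(null_r >= observed_r)
print(f"r = {observed_r:.3f}, threshold = {threshold:.3f}, p = {p_value:.4f}")
```

Because permutation breaks any real coordinate-rating association, the null distribution reflects chance correlations only, which is why a distribution-free threshold is appropriate after nonlinear dimensionality reduction.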
“…After using t-SNE to process the rice samples, five clusters are clearly apparent and well separated; the Thai jasmine rice and Sichuan Meishan rice clusters separate best, while the clusters of the remaining three varieties show only a small amount of crossover. This is because the t-SNE dimensionality reduction method replaces the Gaussian distribution in the low-dimensional space with a t-distribution, whose long-tailed shape [29] (lower in the centre, higher and longer in the tails) separates the sample points of the five rice varieties more distinctly. Yan Hu et al. [20] used t-SNE and PCA to construct a tea variety classification model through three-dimensional visualization.…”
Section: Feature Downscaling and Selection (mentioning, confidence: 99%)
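The long-tail property invoked in the statement above is easy to check numerically: the Student-t distribution with one degree of freedom (the distribution t-SNE uses in the low-dimensional space) assigns far more density to large distances than a Gaussian does. This comparison is illustrative, not from the cited paper.

```python
from scipy.stats import norm, t

# Compare the density a Gaussian and a t(df=1) distribution assign to
# points at increasing distance from the centre. The t-distribution is
# lower at the centre but much higher in the tails, which is what pushes
# dissimilar clusters apart in a t-SNE embedding.
for d in (0.0, 2.0, 5.0):
    g = norm.pdf(d)
    h = t.pdf(d, df=1)
    print(f"distance {d}: Gaussian = {g:.6f}, t(df=1) = {h:.6f}, ratio = {h / g:.1f}")
```

At distance 0 the Gaussian is denser, but by distance 5 the t-density exceeds it by several orders of magnitude; this heavy-tail mismatch is what lets moderately dissimilar points sit far apart in the embedding.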
“…t-SNE is a popular learning algorithm based on the stochastic neighbor embedding (SNE) algorithm for visualizing high-dimensional datasets by representing them in a low-dimensional space of 2 or 3 dimensions [29]. Unlike SNE, which models pairwise similarities between individual samples in the low-dimensional space with a Gaussian distribution, t-SNE models them with a Student's t-distribution, whose heavier tails reduce crowding of the embedded points [29].…”
Section: Feature Downscaling and Feature Selection (mentioning, confidence: 99%)
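The 2-D visualization setting described above corresponds to a standard t-SNE call. A minimal sketch with scikit-learn, using the bundled digits dataset as a stand-in for the high-dimensional data (the parameter choices here are illustrative defaults, not from the cited work):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# Embed 64-dimensional digit images into 2 dimensions for plotting.
X, y = load_digits(return_X_y=True)
emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(X)
print(emb.shape)  # one 2-D coordinate per input sample
```

Setting `n_components=3` instead gives the three-dimensional visualization mentioned in the rice and tea classification statements.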
“…In general, linear manifold learning methods aim to maintain the global structure of the data [3] (samples that are far apart (close together) in the high-dimensional space remain far apart (close together) in the low-dimensional representation). Conversely, nonlinear manifold learning methods seek to preserve the local structure of the data [3]; however, the data structure these methods maintain depends on the number of neighbours considered [17]. Consequently, tuning the number of neighbours has a crucial impact on the data structure that is preserved.…”
Section: Introduction (mentioning, confidence: 99%)
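The neighbour-tuning sensitivity noted above can be demonstrated with any neighbourhood-based method. A sketch using Isomap on a synthetic Swiss-roll dataset, scoring each setting with scikit-learn's trustworthiness measure (closer to 1 means local neighbourhoods survive the embedding better); the dataset, method, and parameter values are illustrative, not from the paper under discussion:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, trustworthiness

# How much local structure is preserved depends on n_neighbors:
# too few neighbours fragments the manifold, too many shortcuts
# across it and blurs local neighbourhoods.
X, _ = make_swiss_roll(n_samples=800, random_state=0)
for k in (5, 15, 50):
    emb = Isomap(n_neighbors=k, n_components=2).fit_transform(X)
    score = trustworthiness(X, emb, n_neighbors=5)
    print(f"n_neighbors={k}: trustworthiness={score:.3f}")
```

Comparing such scores across neighbour counts is one concrete way to carry out the tuning the quoted passage calls crucial.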