2017
DOI: 10.1609/aaai.v31i1.10885
Variable Kernel Density Estimation in High-Dimensional Feature Spaces

Abstract: Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high-dimensional feature spaces. We derive a variable kernel bandwidth estimator by minimizing the leave-one-out entropy objective function and show that this estimator is capable of performing estimation in high-dimensional feature spaces with great success. We compare the…
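The abstract describes choosing variable kernel bandwidths by minimizing a leave-one-out entropy objective. As a hedged illustration only, not the paper's estimator, the sketch below scores a single global Gaussian bandwidth by its leave-one-out log-likelihood (the sample analogue of negative entropy) and picks the best value from a hypothetical grid; the function name, data, and grid are assumptions for this example.

```python
import numpy as np

def loo_log_likelihood(X, h):
    """Leave-one-out log-likelihood of a Gaussian KDE with global bandwidth h."""
    n, d = X.shape
    # Pairwise squared Euclidean distances between all points.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * h * h)) / ((2.0 * np.pi) ** (d / 2) * h ** d)
    np.fill_diagonal(K, 0.0)  # leave each point out of its own density estimate
    dens = K.sum(axis=1) / (n - 1)
    return np.log(dens).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # toy 3-D data, not from the paper
hs = np.linspace(0.1, 2.0, 40)           # hypothetical candidate bandwidths
best_h = hs[int(np.argmax([loo_log_likelihood(X, h) for h in hs]))]
```

The paper's variable-bandwidth estimator assigns a bandwidth per point; this global-bandwidth version only illustrates the leave-one-out objective being optimized.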

Cited by 6 publications (4 citation statements)
References 12 publications
“…However, this is fundamentally different from statistical inference or variable importance estimation (Barber and Candès, 2015; Louppe et al, 2013; Nguyen et al, 2022). Recent extensions of permutation-based importance hold promise to provide statistical error control and effective variable-importance detection in the context of deep learning models and high-dimensional correlated inputs (Chamma et al, 2024a,b; Mi et al, 2021). The successful application of high-dimensional inference procedures will be an important opportunity for future work with the potential to add statistical rigor to the practical inference provided by the structural design choices of GREEN .…”
Section: Discussion
confidence: 99%
“…tight cluster, or intra‐cluster similarity). In our scenario, as data are drawn from a probability distribution, our analysis is driven by estimating, and visualizing, the density [vdWB17, BIW19] of objects in individual scenes (images). In this setting, we make fewer assumptions about our data compared to clustering [TWB06], while still prioritizing the density estimates we compute when deriving a low‐dimensional embedding of objects, akin to DR methods.…”
Section: Related Work
confidence: 99%
“…In spatial statistics, kernel density estimation (KDE) is a non-parametric method used for estimating the probability density function of a random variable [27]. KDE was applied as a continuous replacement for the discrete histogram of the spatial distribution of fatal road accidents.…”
Section: Theoretical Considerations
confidence: 99%
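The excerpt above treats KDE as a continuous replacement for a discrete histogram. A minimal 1-D sketch of that idea, with data and bandwidth chosen here for illustration rather than taken from the cited study: each observation contributes a Gaussian bump, and the average of the bumps is a smooth density whose total mass is approximately 1.

```python
import numpy as np

def kde_1d(data, grid, h):
    """Gaussian KDE of `data` evaluated at each point of `grid`, bandwidth h."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u * u).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
data = rng.normal(size=500)              # toy sample, not accident data
grid = np.linspace(-4.0, 4.0, 801)
dens = kde_1d(data, grid, h=0.4)
mass = dens.sum() * (grid[1] - grid[0])  # Riemann sum of the estimate, close to 1
```

Unlike a histogram, the estimate is continuous in the evaluation point and does not depend on an arbitrary choice of bin edges, only on the bandwidth h.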
“…Namely, the Normal, the Epanechnikov, the Box, and the Triangle functions. Their form and efficiency are shown in Table 2 and Figure 1 [31,34].…”
Section: Theoretical Considerations
confidence: 99%
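The four kernels named in the excerpt can be written down directly; the sketch below is a standalone illustration (Table 2 and Figure 1 of the cited work are not reproduced here). Each kernel integrates to 1; the Box, Triangle, and Epanechnikov kernels are supported on |u| ≤ 1, and the Epanechnikov kernel is the efficiency-optimal choice in the mean-integrated-squared-error sense.

```python
import numpy as np

def normal(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1.0 - u ** 2), 0.0)

def box(u):
    return np.where(np.abs(u) <= 1, 0.5, 0.0)

def triangle(u):
    return np.where(np.abs(u) <= 1, 1.0 - np.abs(u), 0.0)

# Numerically verify that each kernel has unit mass.
u = np.linspace(-5.0, 5.0, 10001)
du = u[1] - u[0]
masses = {k.__name__: float((k(u) * du).sum())
          for k in (normal, epanechnikov, box, triangle)}
```

Any of these can be substituted for the Gaussian in a KDE; the choice of kernel typically matters far less in practice than the choice of bandwidth.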