2016
DOI: 10.1162/neco_a_00835

Direct Density Derivative Estimation

Abstract: Estimating the derivatives of probability density functions is an essential step in statistical data analysis. A naive approach to estimate the derivatives is to first perform density estimation and then compute its derivatives. However, this approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator. To cope with this problem, in this letter, we propose a novel method that directly estimates density derivatives without going through density estimation. …
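A minimal sketch of the naive two-step baseline the abstract argues against: fit a Gaussian kernel density estimate, then differentiate it analytically. The helper names, bandwidth value, and sample setup are illustrative assumptions, not the paper's notation.

```python
# Naive two-step approach: estimate the density with a Gaussian KDE,
# then compute the derivative of that estimate analytically.
import numpy as np

def kde_density(x, samples, h):
    """Gaussian KDE: p_hat(x) = (1/(n*h)) * sum_i phi((x - x_i)/h)."""
    z = (x - samples[:, None]) / h
    return np.exp(-0.5 * z**2).sum(axis=0) / (len(samples) * h * np.sqrt(2 * np.pi))

def kde_derivative(x, samples, h):
    """Derivative of the KDE: p_hat'(x) = (1/(n*h^2)) * sum_i (-z) * phi(z)."""
    z = (x - samples[:, None]) / h
    return (-z * np.exp(-0.5 * z**2)).sum(axis=0) / (len(samples) * h**2 * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
samples = rng.standard_normal(500)
grid = np.linspace(-3, 3, 61)
h = 0.3  # a bandwidth tuned for the density need not suit its derivative
p_hat = kde_density(grid, samples, h)
dp_hat = kde_derivative(grid, samples, h)
```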

Cited by 13 publications (10 citation statements). References 42 publications (71 reference statements).
“…However, the iterative formula involves the first and second derivatives of the marginal log-likelihood, and estimates of the derivatives of a density produced through kernel methods are notoriously unstable (see Chapter 3 of Silverman (1986)). There exist different approaches to deal with this problem (see Sasaki et al. (2016) and Shen & Ghosal (2017)).…”
Section: = σ
mentioning
confidence: 99%
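A small self-contained check of the instability claim in the quote above: under one illustrative setup (standard normal data, arbitrary grid, seed, and bandwidths), the bandwidth that works well for the density typically differs from the one that works well for its derivative.

```python
# Illustrative check: compare density and derivative errors of a Gaussian
# KDE across bandwidths; the printed MSEs tend to be minimized at
# different h values, which is the instability the quote refers to.
import numpy as np
from scipy.stats import norm

def kde(x, xs, h, deriv=False):
    """Gaussian KDE value (or its derivative if deriv=True) at points x."""
    z = (x - xs[:, None]) / h
    k = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    if deriv:
        k = -z * k / h
    return k.sum(axis=0) / (len(xs) * h)

rng = np.random.default_rng(1)
xs = rng.standard_normal(300)
grid = np.linspace(-3, 3, 121)
for h in (0.1, 0.2, 0.4, 0.8):
    e_p = np.mean((kde(grid, xs, h) - norm.pdf(grid)) ** 2)
    # true derivative of the standard normal pdf is -x * phi(x)
    e_dp = np.mean((kde(grid, xs, h, deriv=True) + grid * norm.pdf(grid)) ** 2)
    print(f"h={h:.1f}  density MSE={e_p:.2e}  derivative MSE={e_dp:.2e}")
```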
“…First of all, to lower the complexity as dataset sizes grow, noisy gradient estimation with mini-batches of samples has been widely used (Chen, Fox, and Guestrin 2014; Ma, Chen, and Fox 2015; Strathmann 2018; Li, Zhang, and Li 2018). In addition to direct estimation, regression-based methods have been studied that learn to predict the gradient (Sasaki, Noh, and Sugiyama 2015; Sasaki et al. 2016). Furthermore, Filippone and Engler (2015) studied the use of conjugate gradients for sampling from a Gaussian process with an unbiased solver.…”
Section: Related Work
mentioning
confidence: 99%
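A hedged sketch of the regression-based idea mentioned in the quote: directly fitting a model of the log-density gradient (score) rather than differentiating a density estimate, in the spirit of Sasaki, Noh, and Sugiyama (2015). The Gaussian basis, centers, and ridge parameter are assumptions for illustration, not the cited papers' exact formulation.

```python
# Least-squares fit of g(x) = sum_j theta_j * psi_j(x) to d/dx log p(x).
# Minimizing E[(g - dlogp)^2] reduces, via integration by parts, to
# minimizing theta^T G theta + 2 h^T theta with
#   G = E[psi(x) psi(x)^T],  h = E[psi'(x)],
# whose regularized minimizer is theta = -(G + lam I)^{-1} h.
import numpy as np

def fit_score_1d(xs, centers, sigma, lam=1e-3):
    z = (xs[:, None] - centers[None, :]) / sigma   # (n, b)
    psi = np.exp(-0.5 * z**2)                      # Gaussian basis values
    dpsi = -z / sigma * psi                        # their x-derivatives
    G = psi.T @ psi / len(xs)
    h = dpsi.mean(axis=0)
    theta = -np.linalg.solve(G + lam * np.eye(len(centers)), h)
    return lambda x: np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2) @ theta

rng = np.random.default_rng(2)
xs = rng.standard_normal(1000)
score = fit_score_1d(xs, centers=np.linspace(-3, 3, 15), sigma=1.0)
print(score(np.array([-1.0, 0.0, 1.0])))  # true score of N(0,1) is -x
```

The integration-by-parts step is what makes the fit "direct": the unknown true score never appears in the empirical objective, only the basis functions and their derivatives evaluated at the samples.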
“…When using Gaussian kernels, selecting an optimal bandwidth is significantly time-consuming due to the dimensionality and size of the dataset (Raykar and Duraiswami 2006); and 3) Computational Complexity. It is quite time-consuming to compute the derivatives of a kernel density model (Sasaki, Noh, and Sugiyama 2015; Sasaki et al. 2016), i.e., gradients of log-sum-exp functions.…”
Section: Introduction
mentioning
confidence: 99%
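To make the log-sum-exp point concrete: the log of a Gaussian KDE is a log-sum-exp over all n samples, so every gradient query touches every sample. A minimal sketch (variable names and the test point are illustrative):

```python
# log p_hat(x) = logsumexp_i(-||x - x_i||^2 / (2 h^2)) + const, so
# grad log p_hat(x) = sum_i w_i (x_i - x) / h^2 with softmax weights w_i.
# Each evaluation costs O(n*d), the complexity the quote objects to.
import numpy as np
from scipy.special import softmax

def grad_log_kde(x, samples, h):
    """Gradient of log p_hat at a single point x, for a Gaussian KDE."""
    diffs = samples - x                                 # (n, d)
    w = softmax(-0.5 * (diffs**2).sum(axis=1) / h**2)   # (n,)
    return (w[:, None] * diffs).sum(axis=0) / h**2

rng = np.random.default_rng(3)
samples = rng.standard_normal((2000, 2))
print(grad_log_kde(np.array([1.0, 1.0]), samples, h=0.5))
```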
“…The mutual information estimation by the local Gaussian approximation is developed in [16]. Note that various deep results (including the central limit theorem) were obtained for the Kullback-Leibler estimates under certain conditions imposed on derivatives of unknown densities (see, e.g., the recent papers [2], [24], [33]). Our goal is to provide wide conditions for the asymptotic unbiasedness and L2-consistency of the Kullback-Leibler divergence estimates (1.3), as n, m → ∞, without such smoothness hypotheses.…”
Section: Introduction
mentioning
confidence: 99%
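Estimate (1.3) itself is not reproduced in the excerpt; as an assumed stand-in, the sketch below implements the standard k-nearest-neighbor Kullback-Leibler estimator studied in this literature (in the style of Wang, Kulkarni, and Verdú). The sample sizes and distributions are illustrative.

```python
# k-NN estimate of D(P||Q):
#   D_hat = (d/n) * sum_i log(nu_k(i) / rho_k(i)) + log(m / (n - 1)),
# where rho_k(i) is the k-th NN distance of x_i within x (excluding itself)
# and nu_k(i) is the k-th NN distance from x_i to the sample y.
import numpy as np
from scipy.spatial import cKDTree

def kl_knn(x, y, k=1):
    """Samples x ~ P with shape (n, d) and y ~ Q with shape (m, d)."""
    n, d = x.shape
    m = y.shape[0]
    rho = cKDTree(x).query(x, k=k + 1)[0][:, -1]  # k+1: nearest point is x_i itself
    nu = cKDTree(y).query(x, k=k)[0]
    if k > 1:
        nu = nu[:, -1]
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=(2000, 1))
y = rng.normal(1.0, 1.0, size=(2000, 1))
print(kl_knn(x, y))  # true KL between N(0,1) and N(1,1) is 0.5
```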