Abstract: We address the problem of non-parametric estimation of the recently proposed measures of statistical dispersion of positive continuous random variables. The measures are based on the concepts of differential entropy and Fisher information and describe the "spread" or "variability" of the random variable from a different point of view than the ubiquitously used concept of standard deviation. The maximum penalized likelihood estimation of the probability density function proposed by Good and Gaskins is …
“…However, the estimation of these coefficients from data is more problematic. We present the obtained estimations of the dispersion coefficients based on the MPL method, which extends our previous study [3]. …”
“…There Huber [26] found a unique density with minimal FI given a set of k ≥ 2 samples from the cumulative distribution function. Kostal and Pokora [27] adapted the maximum penalized likelihood method of Good and Gaskins [28] to compute the FI. Kostal and Pokora rejected the use of kernel density estimation (KDE) for the direct computation of the FI because no appropriate bandwidth parameter to control the p′/p term in Eq.…”
Section: Introduction (mentioning, confidence: 99%)
“…Kostal and Pokora rejected the use of kernel density estimation (KDE) for the direct computation of the FI because no appropriate bandwidth parameter to control the p′/p term in Eq. (1) is known [27].…”
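The bandwidth sensitivity described in the quoted passage can be seen directly by plugging a Gaussian KDE into the Fisher information functional J(p) = ∫ (p′/p)² p dx. The following is a minimal sketch assuming SciPy's `gaussian_kde` (it is not the MPL method of Good and Gaskins); the sample size, grid, and density cutoff are illustrative choices:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)  # sample from N(0, 1); exact J = 1/sigma^2 = 1

kde = gaussian_kde(x)  # bandwidth chosen by Scott's rule

grid = np.linspace(-5.0, 5.0, 2001)
p = kde(grid)
dp = np.gradient(p, grid)  # p' by central differences on the grid

# J(p) = integral of (p'/p)^2 * p dx, restricted to the region
# where the estimated density is not vanishingly small
mask = p > 1e-8
J = trapezoid((dp[mask] / p[mask]) ** 2 * p[mask], grid[mask])
```

Because the kernel-smoothed density behaves roughly like N(0, 1 + h²), the estimate of J shrinks as the bandwidth h grows, which is exactly the p′/p control problem the quote refers to.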
The Fisher information matrix (FIM) is a widely used measure in applications including statistical inference, information geometry, experiment design, and the study of criticality in biological systems. The FIM is defined for a parametric family of probability distributions, and its estimation from data follows one of two paths: either the distribution is assumed to be known and the parameters are estimated from the data, or the parameters are known and the distribution is estimated from the data. We consider the latter case, which is applicable, for example, to experiments where the parameters are controlled by the experimenter and a complicated relation exists between the input parameters and the resulting distribution of the data. Since we assume that the distribution is unknown, we apply nonparametric density estimation to the data and then compute the FIM directly from that estimate, using a finite-difference approximation for the derivatives in its definition. The accuracy of the estimate depends on both the method of nonparametric estimation and the difference Δθ between the densities used in the finite-difference formula. We develop an approach for choosing the optimal parameter difference Δθ based on large deviations theory and compare two nonparametric density estimation methods, the Gaussian kernel density estimator and a novel "density estimation using field theory" method. We also compare these two methods to a recently published approach that circumvents the need for density estimation by estimating a nonparametric f-divergence and using it to approximate the FIM. We use the Fisher information of the normal distribution to validate our method, and as a more involved example we compute the temperature component of the FIM in the two-dimensional Ising model and show that it obeys the expected relation to the heat capacity and therefore peaks at the phase transition at the correct critical temperature.
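The finite-difference recipe in this abstract can be sketched as follows, with a Gaussian KDE as the nonparametric estimator. The toy "experiment" below shifts the mean of a unit-variance normal, for which the exact Fisher information is 1; the sampler, Δθ, and grid are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)

def experiment(theta, n=4000):
    # stand-in for an experiment controlled by theta: theta shifts the mean
    # of a unit-variance normal, so the exact Fisher information is 1
    return rng.normal(theta, 1.0, n)

theta, dtheta = 0.0, 0.3  # dtheta is the finite difference in the parameter
grid = np.linspace(-3.5, 3.5, 1401)

# nonparametric density estimates at theta - dtheta, theta, theta + dtheta
p_minus = gaussian_kde(experiment(theta - dtheta))(grid)
p_center = gaussian_kde(experiment(theta))(grid)
p_plus = gaussian_kde(experiment(theta + dtheta))(grid)

# central-difference estimate of the score d/dtheta log p_theta(x)
score = (np.log(p_plus) - np.log(p_minus)) / (2.0 * dtheta)

# I(theta) = E[score^2] under p_theta, approximated on the grid
I = trapezoid(score ** 2 * p_center, grid)
```

In practice the choice of Δθ trades finite-difference bias (Δθ too large) against noise from the three independent density estimates (Δθ too small), which is the trade-off the abstract's large-deviations argument addresses.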
“…We address the problem of non-parametric estimation of the probability density function as a description of the probability distribution of non-correlated interspike intervals (ISI) in records of neuronal activity. We also continue our previous effort [1, 2] to propose alternative estimators of the variability measures. Kernel density estimators are probably the most frequently used non-parametric estimators of the probability distribution.…”
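As a minimal illustration of the kernel approach mentioned in the quote, the sketch below fits SciPy's `gaussian_kde` to hypothetical exponentially distributed ISI data. The unbounded Gaussian kernel leaks probability mass to negative times, which is one reason plain KDE is awkward for positive variables such as ISIs:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

rng = np.random.default_rng(2)
# hypothetical ISI record: positive, exponentially distributed intervals (s)
isi = rng.exponential(scale=0.1, size=2000)

kde = gaussian_kde(isi)
t = np.linspace(0.0, 1.0, 1001)
p = kde(t)

# total mass on t >= 0; the shortfall from 1 is kernel mass leaked to t < 0
mass = trapezoid(p, t)
```

A reflection or log-transform boundary correction would recover the missing mass near zero, but is omitted here to keep the sketch short.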