2018
DOI: 10.1080/10618600.2018.1424636
Local-Likelihood Transformation Kernel Density Estimation for Positive Random Variables

Abstract: The kernel estimator is known to be inadequate for estimating the density of a positive random variable X. The main reason is the well-known boundary bias problem it suffers from, but its poor behaviour in the long right tail that such a density typically exhibits also plays a role. A natural approach to this problem is to first estimate the density of the logarithm of X, and then obtain an estimate of the density of X using standard results on functions of random variables ('back-transformation'). Although intuitive…
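The back-transformation idea in the abstract can be sketched as follows. This is a minimal illustration in Python with an ordinary Gaussian kernel estimator, not the paper's local-likelihood estimator: estimate the density of Y = log X, then recover f_X(x) = f_Y(log x)/x for x > 0. The sample and grid are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative positive sample; a lognormal stands in for a typical
# right-skewed density on (0, infinity).
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=2000)

# Step 1: ordinary kernel density estimate of Y = log X
# (no boundary problem, since Y lives on the whole real line).
kde_log = gaussian_kde(np.log(x))

# Step 2: back-transform via the change-of-variables formula
# f_X(x) = f_Y(log x) / x, valid for x > 0.
def f_x(grid):
    grid = np.asarray(grid, dtype=float)
    return kde_log(np.log(grid)) / grid

grid = np.linspace(0.05, 10.0, 500)
dens = f_x(grid)
```

The division by x is what reintroduces the difficulty the paper discusses: a small bias of the log-scale estimate near minus infinity is inflated near zero and in the tail, which is what motivates the local-likelihood refinement.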

Cited by 10 publications (6 citation statements) · References 63 publications (95 reference statements)
“…We use the R package kde1d (version 1.0.2, Nagler and Vatter, 2019), which uses univariate local polynomial (log-quadratic) kernel density estimators. Our validation tests support the literature (Geenens and Wang, 2018) on the strength of this method. We find that confidence bounds are located more accurately with kde1d quantiles than with raw bootstrap quantiles, BCa quantiles, or with calibrated (double bootstrap) quantiles (see Efron and Tibshirani, 1993 for a description of these methods) and that estimated distributions are more accurate (in integrated squared error) than standard kernel density estimation.…”
Section: (Continued), supporting
confidence: 84%
“…For the present analysis, each age was replotted 10,000 times along a normal distribution using the rnorm command in R, based on the laboratory-generated mean and 1 SE. The KDE was created with the "kde1d" package in R (69). Bandwidth was set to the default, with data-derived parameters developed by Sheather and Jones (70).…”
Section: Methods, mentioning
confidence: 99%
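The resampling-plus-KDE step described in that snippet can be sketched in Python. The cited work uses R's rnorm and the kde1d package; the sketch below substitutes NumPy's normal sampler and SciPy's Gaussian KDE (rule-of-thumb bandwidth, not Sheather–Jones), and the ages and errors are purely illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
ages = np.array([1200.0, 1350.0, 1500.0])  # illustrative mean ages
errs = np.array([30.0, 25.0, 40.0])        # illustrative 1-SE values

# Replot each age 10,000 times along a normal distribution centred on
# its mean with sd equal to its 1 SE, then pool the draws.
draws = np.concatenate(
    [rng.normal(m, s, size=10_000) for m, s in zip(ages, errs)]
)

# Kernel density estimate over the pooled draws.
kde = gaussian_kde(draws)
grid = np.linspace(draws.min(), draws.max(), 400)
density = kde(grid)
```

Pooling the per-age draws before the KDE is what lets measurement uncertainty (the 1 SE of each age) widen the resulting density rather than treating each age as exact.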
“…For example, a check-in with the latitude and longitude of a university belongs to the 'Educational' category, whereas check-ins at restaurants belong to the 'Food' category. For the spatial analysis, we used ArcMap, the Kernel Density Estimation (KDE) technique [18], and geospatial data (shapefiles) from OpenStreetMap [19].…”
Section: Introduction, mentioning
confidence: 99%