Kernel smoothing is routinely used for the estimation of relative risk based on point locations of disease cases and sampled controls over a geographical region. Typically, fixed-bandwidth kernel estimation has been employed, despite the widely recognized problems experienced with this methodology when the underlying densities exhibit the type of spatial inhomogeneity frequently seen in geographical epidemiology. A more intuitive approach is to utilize a spatially adaptive, variable smoothing parameter. In this paper, we examine the properties of the adaptive kernel estimator by both asymptotic analysis and a simulation study, finding advantages over the fixed kernel approach in both cases. We also look at practical issues with implementation of the adaptive relative risk estimator (including bandwidth choice and boundary correction), and develop a computationally inexpensive method for generating tolerance contours to highlight areas of significantly elevated risk.
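The contrast between fixed and adaptive smoothing described in this abstract can be sketched in Python. The snippet below implements an Abramson-style square-root-law adaptive kernel density estimator and takes the log ratio of two such estimates to form a relative risk surface. The function names, the Gaussian kernel, and the pilot-bandwidth construction are illustrative assumptions, not the precise estimator studied in the paper.

```python
import numpy as np

def gauss2d(d2, h):
    # Isotropic bivariate Gaussian kernel at squared distance d2, bandwidth h
    return np.exp(-d2 / (2.0 * h**2)) / (2.0 * np.pi * h**2)

def sqdist(a, b):
    # Pairwise squared Euclidean distances, shape (len(a), len(b))
    return ((a[:, None, :] - b[None, :, :])**2).sum(-1)

def adaptive_density(pts, grid, h0):
    """Abramson-style adaptive KDE: each data point x_i gets bandwidth
    h0 * sqrt(g / pilot(x_i)), where pilot is a fixed-bandwidth estimate
    and g its geometric mean over the data, so smoothing widens in
    sparse regions and narrows in dense ones."""
    pilot = gauss2d(sqdist(pts, pts), h0).mean(1)
    g = np.exp(np.log(pilot).mean())      # geometric-mean normalisation
    h_i = h0 * np.sqrt(g / pilot)         # per-point adaptive bandwidths
    return gauss2d(sqdist(grid, pts), h_i[None, :]).mean(1)

def log_relative_risk(cases, controls, grid, h0):
    # Log ratio of adaptively smoothed case and control densities
    return (np.log(adaptive_density(cases, grid, h0))
            - np.log(adaptive_density(controls, grid, h0)))
```

On simulated data with cases clustered over diffuse controls, the log-risk surface is elevated at the cluster and negative away from it, which is the behaviour tolerance contours are then used to assess.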
Kernel smoothing is a highly flexible and popular approach for estimation of probability density and intensity functions of continuous spatial data. In this role, it also forms an integral part of estimation of functionals such as the density-ratio or "relative risk" surface. Originally developed with the epidemiological motivation of examining fluctuations in disease risk based on samples of cases and controls collected over a given geographical region, such functions have also been successfully used across a diverse range of disciplines where a relative comparison of spatial density functions has been of interest. This versatility has demanded ongoing developments and improvements to the relevant methodology, including the use of spatially adaptive smoothers; tests of significantly elevated risk based on asymptotic theory; extension to the spatiotemporal domain; and novel computational methods for their evaluation. In this tutorial paper, we review the current methodology, including the most recent developments in estimation, computation, and inference. All techniques are implemented in the new software package sparr, publicly available for the R language, and we illustrate its use with a pair of epidemiological examples.
Kernel smoothing is a popular approach to estimating relative risk surfaces from data on the locations of cases and controls in geographical epidemiology. The interpretation of such surfaces is facilitated by plotting of tolerance contours which highlight areas where the risk is sufficiently high to reject the null hypothesis of unit relative risk. Previously it has been recommended that these tolerance intervals be calculated using Monte Carlo randomization tests. We examine a computationally cheap alternative whereby the tolerance intervals are derived from asymptotic theory. We also examine the performance of global tests of heterogeneous risk employing statistics based on kernel risk surfaces, paying particular attention to the effect of smoothing parameter choice on test power.
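The asymptotic alternative to Monte Carlo tolerance contours can be illustrated with a minimal sketch, assuming fixed-bandwidth Gaussian kernel estimates and a standard plug-in normal approximation for the variance of the log relative risk. The specific variance formula (using the Gaussian kernel roughness R(K) = 1/(4π)) and all function names are assumptions for illustration, not the exact construction recommended in the paper.

```python
import math
import numpy as np

def fixed_density(pts, grid, h):
    # Fixed-bandwidth bivariate Gaussian KDE evaluated at grid points
    d2 = ((grid[:, None, :] - pts[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2.0 * h**2)).mean(1) / (2.0 * np.pi * h**2)

def asymptotic_pvalues(cases, controls, grid, h):
    """Pointwise upper-tail p-values for H0: relative risk = 1, via an
    asymptotic normal approximation to the log relative risk with a
    simple plug-in variance; no Monte Carlo randomization needed."""
    n1, n2 = len(cases), len(controls)
    f = fixed_density(cases, grid, h)
    g = fixed_density(controls, grid, h)
    rho = np.log(f) - np.log(g)                 # estimated log relative risk
    # Plug-in variance: R(K) * (1/(n1*f) + 1/(n2*g)) / h^2, R(K) = 1/(4*pi)
    var = (1.0/(n1*f) + 1.0/(n2*g)) / (4.0 * np.pi * h**2)
    z = rho / np.sqrt(var)
    # Upper-tail normal p-values; the 0.05 level set of this surface gives
    # the tolerance contour for significantly elevated risk
    return np.array([0.5 * math.erfc(zi / math.sqrt(2.0)) for zi in z])
```

In practice the p-value surface would be evaluated over a fine grid and its 0.05 contour drawn on top of the risk surface; the sketch evaluates it at two points only.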
In this paper we present a novel inference methodology to perform Bayesian inference for spatiotemporal Cox processes where the intensity function depends on a multivariate Gaussian process. Dynamic Gaussian processes are introduced to allow for evolution of the intensity function over discrete time. The novelty of the method lies in the fact that no discretisation error is involved despite the intractability of the likelihood function and the infinite dimensionality of the problem. The method is based on a Markov chain Monte Carlo algorithm that samples from the joint posterior distribution of the parameters and latent variables of the model. A particular choice of the dominating measure to obtain the likelihood function is shown to be crucial to devise a valid MCMC. The models are defined in a general and flexible way but they are amenable to direct sampling from the relevant distributions, due to careful characterisation of their components. The models also allow for the inclusion of regression covariates and/or temporal components to explain the variability of the intensity function. These components may be subject to relevant interaction with space and/or time. Real and simulated examples illustrate the methodology, followed by concluding remarks.
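The paper's sampler is exact, avoiding discretisation entirely; as a rough, contrasting illustration of MCMC for Cox process posteriors, the sketch below runs a random-walk Metropolis sampler on a crudely discretised log-Gaussian Cox process (cell counts Poisson with a Gaussian process prior on the log-intensity). The gridding, the function name, and all tuning constants are assumptions for illustration and do not reproduce the authors' discretisation-free method.

```python
import numpy as np

def lgcp_mh(counts, K, n_iter=2000, step=0.1, seed=0):
    """Random-walk Metropolis for a discretised log-Gaussian Cox process:
    counts[i] ~ Poisson(exp(eta[i])), eta ~ N(0, K). A simplified gridded
    sketch of MCMC for Cox processes, NOT an exact sampler."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    n = counts.size
    Kinv = np.linalg.inv(K + 1e-8 * np.eye(n))   # jitter for stability

    def log_post(eta):
        # Poisson log-likelihood (up to constants) plus GP log-prior
        return counts @ eta - np.exp(eta).sum() - 0.5 * eta @ Kinv @ eta

    eta = np.log(counts + 1.0)                   # initialise near the data
    lp = log_post(eta)
    samples = np.empty((n_iter, n))
    for t in range(n_iter):
        prop = eta + step * rng.standard_normal(n)   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # MH accept/reject
            eta, lp = prop, lp_prop
        samples[t] = eta
    return samples
```

The posterior mean intensity per cell, exp(eta) averaged over post-burn-in draws, is smoothed toward cells with high counts while the GP prior shares information between neighbouring cells; the exact methods of the paper achieve this without the gridding error introduced here.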