2016
DOI: 10.1080/01621459.2015.1044091

Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets

Abstract: Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations become large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing p…
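
For orientation, the construction summarized in the abstract can be written in one line (a sketch; the notation is ours, following the paper's Vecchia-style factorization over an ordered reference set s_1, ..., s_k):

    p(w) = \prod_{i=1}^{k} p\big( w(s_i) \mid w_{N(s_i)} \big), \qquad |N(s_i)| \le m,

where N(s_i) contains the (at most m) nearest neighbors of s_i among s_1, ..., s_{i-1}. Each conditional is Gaussian and each conditioning set is small, so the joint is a proper multivariate Gaussian density whose precision matrix has only O(k m^2) nonzero entries.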

Cited by 493 publications (719 citation statements): 10 supporting, 709 mentioning, 0 contrasting. Citing publications span 2016 to 2021. References 44 publications.

Selected citation statements, ordered by relevance:
“…Gaussian processes (GPs) are widely used for modeling unknown spatial surfaces such as w(s), due to their convenient formulation as a multivariate Gaussian prior for the spatial random effect, unparalleled predictive performance (53), and ease of generating uncertainty-quantified predictions at unobserved locations. We use the computationally effective nearest-neighbor GP (27), which nicely embeds into the Bayesian hierarchical setup as a prior for w(s) in the second stage of the model specification. All technical specifications of the Bayesian spatial model are provided in SI Appendix, section S1.…”
Section: Methods (mentioning)
confidence: 99%
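
To make the quoted role of the NNGP concrete (a prior for w(s) in the second stage of a Bayesian hierarchy), here is a minimal, self-contained NumPy sketch of the NNGP log density; the exponential covariance, its parameters, the neighbor size m, and the simulated coordinates are all illustrative assumptions, not details of the cited works:

import numpy as np

def exp_cov(a, b, sigma2=1.0, phi=3.0):
    # isotropic exponential covariance between row-sets a and b (assumed model)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

def nngp_logpdf(w, coords, m=10, sigma2=1.0, phi=3.0):
    # NNGP factorization: each w(s_i) conditions only on its
    # (at most m) nearest previously ordered neighbors
    n = len(coords)
    logp = 0.0
    for i in range(n):
        if i == 0:
            mu, var = 0.0, sigma2
        else:
            d = np.linalg.norm(coords[:i] - coords[i], axis=1)
            nb = np.argsort(d)[: min(m, i)]          # neighbor indices
            C_nn = exp_cov(coords[nb], coords[nb], sigma2, phi)
            c_in = exp_cov(coords[i : i + 1], coords[nb], sigma2, phi).ravel()
            sol = np.linalg.solve(C_nn, c_in)
            mu = sol @ w[nb]                          # kriging mean
            var = sigma2 - c_in @ sol                 # conditional variance
        logp += -0.5 * (np.log(2 * np.pi * var) + (w[i] - mu) ** 2 / var)
    return logp

rng = np.random.default_rng(0)
coords = rng.uniform(size=(500, 2))
w = rng.normal(size=500)
print(nngp_logpdf(w, coords))

Each of the n terms costs only an m x m solve, which is what makes the prior usable inside an MCMC loop for large n.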
“…More critically, the first two global methodologies emphasized estimating a single trait value per PFT at every location, whereas both ground-based (5,14) and remotely sensed (26) observations suggest that at ecosystem or landscape scales traits would be better represented by distributions. Here, we use an updated version of the largest global database of plant traits (14) coupled with modern Bayesian spatial statistical modeling techniques (27) to capture local and global variability in plant traits. This combination allows the representation of trait variation both within pixels on a gridded land surface and across global environmental gradients.…”
(mentioning)
confidence: 99%
“…Ignoring them in order to work with much smaller, n-sized matrices brings big computational savings with little impact on accuracy. This is a sensible idea: it can be shown to induce a valid stochastic process (Datta, Banerjee, Finley, and Gelfand 2016); when n ≪ 1000 the method is fast and accurate, and as n grows the predictions increasingly resemble their full N-data counterparts; and, for smaller n, V_n(x) is organically inflated relative to V_N(x), acknowledging greater uncertainty in the approximation. The simplest version of such a scheme would be via nearest neighbors (NN): X_n(x) comprised of the closest elements of X_N to x.…”
Section: Local Approximate Gaussian Process Models (mentioning)
confidence: 99%
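
The “simplest version” described in this quote is easy to sketch. Below is a hedged Python illustration of nearest-neighbor local prediction: X_n(x) is just the n closest rows of X_N; the squared-exponential kernel, its parameters, and the simulated data are our assumptions, not those of the quoted work:

import numpy as np

def sq_exp(a, b, ell=0.2, s2=1.0):
    # squared-exponential covariance (illustrative choice)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return s2 * np.exp(-0.5 * d2 / ell**2)

def local_gp_predict(x, X_N, y_N, n=50, jitter=1e-6):
    # X_n(x): the n rows of X_N closest to x (plain NN subdesign)
    nb = np.argsort(np.linalg.norm(X_N - x, axis=1))[:n]
    Xn, yn = X_N[nb], y_N[nb]
    K = sq_exp(Xn, Xn) + jitter * np.eye(n)
    k = sq_exp(x[None, :], Xn).ravel()
    sol = np.linalg.solve(K, k)
    mean = sol @ yn
    # V_n(x): inflated relative to the full-data variance V_N(x),
    # reflecting the approximation, as the quote notes
    var = sq_exp(x[None, :], x[None, :])[0, 0] - k @ sol
    return mean, var

rng = np.random.default_rng(1)
X_N = rng.uniform(size=(5000, 2))
y_N = np.sin(4 * X_N[:, 0]) + rng.normal(scale=0.05, size=5000)
print(local_gp_predict(np.array([0.5, 0.5]), X_N, y_N))

Each prediction touches only an n x n system rather than the full N x N one, which is the source of the computational savings.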
“…Sparse approximation techniques either shrink the covariance of distant pairs of spatial locations to zero, yielding a sparse covariance matrix (Furrer et al. 2006; Kaufman et al. 2008), or assume a Gaussian Markov property of the spatial random field, yielding a sparse precision matrix (Rue and Tjelmeland 2002; Lindgren et al. 2011). Another way to induce sparsity in the precision matrix is to use conditional likelihoods (e.g., Vecchia 1988); most recently, Datta et al. (2016) proposed a nearest-neighbor GP (NNGP), and a new permutation and grouping method for improving the performance of the NNGP can be found in Guinness (2016). The most recent “hybrid” methods extending the low-rank models include Nychka et al. (2015), Katzfuss (2017), and Ma and Kang (2017).…”
Section: Introduction (mentioning)
confidence: 99%
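
To illustrate how the conditional-likelihood (Vecchia) idea quoted above produces a sparse precision matrix, here is a small NumPy sketch: writing A for the matrix of conditional regression weights and D for the conditional variances, the implied precision is Q = (I - A)^T D^{-1} (I - A). The covariance model and its parameters are illustrative assumptions:

import numpy as np

def exp_cov(a, b, sigma2=1.0, phi=3.0):
    # isotropic exponential covariance (assumed model)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

def vecchia_precision(coords, m=5, sigma2=1.0, phi=3.0):
    # Q = (I - A)^T D^{-1} (I - A): A[i] holds the kriging weights
    # of point i on its m nearest previously ordered neighbors,
    # D[i] the conditional variance.
    n = len(coords)
    A = np.zeros((n, n))   # stored sparse in real implementations
    D = np.empty(n)
    D[0] = sigma2
    for i in range(1, n):
        d = np.linalg.norm(coords[:i] - coords[i], axis=1)
        nb = np.argsort(d)[: min(m, i)]
        C_nn = exp_cov(coords[nb], coords[nb], sigma2, phi)
        c_in = exp_cov(coords[i : i + 1], coords[nb], sigma2, phi).ravel()
        sol = np.linalg.solve(C_nn, c_in)
        A[i, nb] = sol
        D[i] = sigma2 - c_in @ sol
    IA = np.eye(n) - A
    return IA.T @ (IA / D[:, None])

coords = np.random.default_rng(2).uniform(size=(200, 2))
Q = vecchia_precision(coords)
print(np.count_nonzero(np.abs(Q) > 1e-12), "of", Q.size, "entries nonzero")

With m = n - 1 the construction recovers the exact precision; a small m trades a little accuracy for roughly O(n m^2) nonzero entries.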
“…We first extend the nearest-neighbor GP models developed by Datta et al. (2016) to construct a nearest-neighbor block GP model. We then propose to apply it to approximate the residual covariance and combine this approximated residual covariance with a reduced-rank predictive process.…”
Section: Introduction (mentioning)
confidence: 99%
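
Read alongside the low-rank-plus-residual (“full-scale”) literature, the combination described in this quote has the familiar shape (a sketch in our notation, not the cited paper's):

    C(s, s') \approx c(s, S^*)^{\top} C^*(S^*, S^*)^{-1} c(s', S^*) + C_{\mathrm{res}}(s, s'),

where the first term is the reduced-rank predictive process built on a set of knots S^*, and the residual covariance C_res is the part approximated with the nearest-neighbor block structure so that its precision remains sparse.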