The theory of quasi-arithmetic means represents a powerful tool in the study of covariance functions across space-time. In the present study we use quasi-arithmetic functionals to make inferences about the permissibility of averages of functions that are not, in general, permissible covariance functions. This is the case, e.g., of the geometric and harmonic averages, for which we obtain permissibility criteria. Some important inequalities involving covariance functions, as well as preference relations and algebraic properties, can also be derived by means of the proposed approach. In particular, quasi-arithmetic covariances allow for ordering and preference relations, for a Jensen-type inequality and for a minimal and a maximal element of their class. The general results shown in this paper are then applied to the study of spatial and spatio-temporal random fields. In particular, we discuss the representation and smoothness properties of a weakly stationary random field with a quasi-arithmetic covariance function. We also show that the generator of the quasi-arithmetic mean can be used as a link function in order to build a nonseparable space-time structure starting from the spatial and temporal margins, a procedure that is technically sound for those working with copulas. Several examples of new families of stationary covariances obtainable with this procedure are shown. Finally, we use quasi-arithmetic functionals to generalise existing results concerning the construction of nonstationary spatial covariances, and discuss the applicability and limits of this generalisation.
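As a minimal illustration of the construction described in this abstract (not the paper's own code), the quasi-arithmetic mean with generator φ is M_φ(c_1, …, c_n) = φ⁻¹(Σ_i w_i φ(c_i)); choosing φ = log gives the geometric average and φ(t) = 1/t the harmonic average. A sketch in Python, using two illustrative exponential correlations:

```python
import numpy as np

def qa_mean(phi, phi_inv, values, weights=None):
    """Quasi-arithmetic mean: phi^{-1}( sum_i w_i * phi(c_i) )."""
    values = np.asarray(values, dtype=float)
    if weights is None:
        weights = np.full(values.shape[0], 1.0 / values.shape[0])
    return phi_inv(np.tensordot(weights, phi(values), axes=1))

# Two exponential correlation functions evaluated on a common lag grid
h = np.linspace(0.0, 3.0, 7)
c1 = np.exp(-h)           # scale parameter 1
c2 = np.exp(-h / 2.0)     # scale parameter 2

# Geometric mean (generator phi = log) and harmonic mean (phi(t) = 1/t)
geo = qa_mean(np.log, np.exp, np.stack([c1, c2]))
har = qa_mean(lambda t: 1.0 / t, lambda t: 1.0 / t, np.stack([c1, c2]))
```

Here the geometric average of the two exponential correlations is again exponential (scale 4/3), hence itself a permissible correlation, and the pointwise ordering harmonic ≤ geometric ≤ arithmetic reflects the Jensen-type inequality mentioned in the abstract.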
Functional data exhibiting a spatial dependence structure occur in many environmental sciences when curves are observed, for example, along time or along depth. Recently, methods allowing the prediction of a curve at an unmonitored site have been developed. However, the existing methods do not allow one to include exogenous variables in the model, such as meteorological information for modeling air pollutant concentrations. In order to introduce exogenous variables, potentially observed as curves as well, we propose to extend the so-called kriging with external drift (or regression kriging) to functional data by means of a three-step procedure involving functional modeling of the trend and spatial interpolation of functional residuals. A cross-validation analysis allows one to choose the smoothing parameters and a preferred kriging predictor for the functional residuals. Our case study considers daily PM10 concentrations measured from October 2005 to March 2006 by the monitoring network of the Piemonte region (Italy), with the trend defined by time-varying meteorological covariates and constant-in-time orographical variables. The performance of the proposed methodology is evaluated by predicting PM10 concentration curves at 10 validation sites, as well as on realistic simulated datasets with a larger number of spatial sites. In this application the proposed methodology represents an alternative to spatio-temporal modeling, but it can be applied more generally to spatially dependent functional data whose domain is not a time interval.
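The three-step procedure can be sketched in scalar (non-functional) form, which conveys the mechanics of regression kriging; all data, covariance parameters and values below are illustrative assumptions, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the case study: n sites, one exogenous covariate
n = 30
coords = rng.uniform(0.0, 10.0, size=(n, 2))
covariate = rng.normal(size=n)
z = 5.0 + 2.0 * covariate + rng.normal(scale=0.5, size=n)  # observations

# Step 1: estimate the trend (external drift) by least squares
X = np.column_stack([np.ones(n), covariate])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
resid = z - X @ beta

# Step 2: simple kriging of the residuals under an assumed exponential covariance
def cov(d, sill=0.25, scale=3.0):
    return sill * np.exp(-d / scale)

D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
K = cov(D) + 1e-9 * np.eye(n)   # small jitter keeps the solve stable

def predict(s0, x0):
    w = np.linalg.solve(K, cov(np.linalg.norm(coords - s0, axis=1)))
    # Step 3: add the kriged residual back onto the trend at s0
    return np.array([1.0, x0]) @ beta + w @ resid

z_hat = predict(np.array([5.0, 5.0]), 0.0)
```

In the functional setting each step operates on curves instead of scalars (a functional regression for the trend, kriging of functional residuals), but the decomposition into trend fit, residual interpolation and recombination is the same.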
A function $\rho:[0,\infty)\to(0,1]$ is completely monotonic if and only if $\rho(\Vert\mathbf{x}\Vert^2)$ is positive definite on $\mathbb{R}^d$ for all $d$, and thus represents the correlation function of a weakly stationary and isotropic Gaussian random field. Radial positive definite functions are also of importance as they represent characteristic functions of spherically symmetric probability distributions. In this paper, we analyze the function \[\rho_{\beta,\gamma}(x)=1-\biggl(\frac{x^{\beta}}{1+x^{\beta}}\biggr)^{\gamma},\qquad x\ge 0,\ \beta,\gamma>0,\] called the Dagum function, and show the parameter ranges for which this function is completely monotonic, that is, positive definite on every $d$-dimensional Euclidean space. Important relations arise with other families of completely monotonic and logarithmically completely monotonic functions. Published in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm); DOI: http://dx.doi.org/10.3150/08-BEJ139.
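A quick numerical sanity check of the Dagum function is easy to script. The parameter choice β = γ = 0.5 is purely illustrative, and the grid checks below are necessary (not sufficient) conditions for complete monotonicity:

```python
import numpy as np

def dagum(x, beta, gamma):
    """Dagum function: rho(x) = 1 - (x^beta / (1 + x^beta))^gamma."""
    xb = np.power(x, beta)
    return 1.0 - np.power(xb / (1.0 + xb), gamma)

# Illustrative parameters; evaluate on a fine grid of positive lags
x = np.linspace(1e-6, 10.0, 2001)
r = dagum(x, 0.5, 0.5)

# Necessary conditions for complete monotonicity on (0, inf):
# positivity, monotone decrease, and convexity
positive = np.all(r > 0.0)
decreasing = np.all(np.diff(r) <= 1e-12)
convex = np.all(np.diff(r, n=2) >= -1e-12)
```

A completely monotonic function must pass all three checks; failing any of them on some grid would rule a parameter pair out immediately.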
Voronoi estimators are non-parametric and adaptive estimators of the intensity of a point process. The intensity estimate at a given location is equal to the reciprocal of the size of the Voronoi/Dirichlet cell containing that location. Their major drawback is that they tend to paradoxically under-smooth the data in regions where the point density of the observed point pattern is high, and over-smooth where the point density is low. To remedy this behaviour, we propose to apply an additional smoothing operation to the Voronoi estimator, based on resampling the point pattern by independent random thinning. Through a simulation study we show that our resample-smoothing technique improves the estimation substantially. In addition, we study statistical properties such as unbiasedness and variance, and propose a rule-of-thumb and a data-driven cross-validation approach to choose the amount of smoothing to apply. Finally we apply our proposed intensity estimation scheme to two datasets: locations of pine saplings (planar point pattern) and motor vehicle traffic accidents (linear network point pattern).
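The estimator and the resample-smoothing step can be sketched with a grid-based approximation of Voronoi cell areas. The 1/p rescaling of each thinned-pattern estimate is one natural convention assumed here for illustration, since p-thinning scales the intensity by p:

```python
import numpy as np

rng = np.random.default_rng(1)

def voronoi_intensity(points, grid):
    """Grid-based Voronoi estimate: 1 / (area of the cell containing each pixel).
    Cell areas are approximated by counting the pixels nearest to each point."""
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=-1)
    owner = np.argmin(d, axis=1)              # nearest point per pixel
    pixel_area = 1.0 / grid.shape[0]          # unit window, uniform grid
    areas = np.bincount(owner, minlength=points.shape[0]) * pixel_area
    return 1.0 / areas[owner]                 # estimate at every pixel

def resample_smoothed(points, grid, p=0.5, m=50):
    """Average of Voronoi estimates over m independent p-thinnings,
    each rescaled by 1/p so a single pass still targets lambda."""
    acc = np.zeros(grid.shape[0])
    for _ in range(m):
        keep = rng.random(points.shape[0]) < p
        if keep.sum() < 2:
            continue
        acc += voronoi_intensity(points[keep], grid) / p
    return acc / m

# Homogeneous Poisson pattern on the unit square, intensity 100
n = rng.poisson(100)
pts = rng.uniform(0.0, 1.0, size=(n, 2))
gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])
lam_hat = resample_smoothed(pts, grid)
```

Averaging over thinned patterns enlarges the effective cells and damps the extreme values the raw Voronoi estimator produces, which is the smoothing effect the abstract describes.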
Thinning strategies are a prime factor in generating spatial patterns in managed forests, and have a dramatic effect on stand development, and hence product yields. As trees generally have long life spans relative to the length of typical research projects, the design and analysis of complex long-term spatial-temporal experiments in forest stands is clearly difficult. This means that forest modelling is a key tool in the formulation and development of optimal management strategies. We show that the highly flexible Renshaw and Särkkä algorithm for modelling the space-time development of marked point processes is easily adapted to enable the comparative study of different thinning regimes. This procedure not only provides a powerful descriptor of forest stand growth, but there is considerable evidence that it is particularly robust to the accuracy of model choice. Two distinct thinning approaches are considered in conjunction with a variety of tree growth functions and both hard- and soft-core interaction functions. The results obtained strongly suggest that combining the immigration-growth-spatial interaction model with spatially explicit thinning algorithms produces a realistic and flexible mechanism for mimicking real forest scenarios.
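A toy version of a spatially explicit thinning rule (thinning from below in crowded neighbourhoods) shows the kind of operation meant; the competition index, marks and thresholds are invented for illustration and are not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Marked pattern: tree locations and sizes (marks) on a 50 m x 50 m plot
n = 200
xy = rng.uniform(0.0, 50.0, size=(n, 2))
size = rng.gamma(shape=4.0, scale=2.0, size=n)

def competition(xy, size, radius=3.0):
    """Sum of neighbour sizes within `radius` of each tree
    (a simple spatially explicit competition index)."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    nb = (d < radius) & (d > 0.0)
    return nb.astype(float) @ size

# Thinning from below: in crowded neighbourhoods, remove the smaller trees
ci = competition(xy, size)
crowded = ci > np.quantile(ci, 0.75)
remove = crowded & (size < np.median(size))
xy_thin, size_thin = xy[~remove], size[~remove]
```

In a full space-time simulation a rule like this would be applied at chosen times between immigration and growth updates, which is how thinning regimes can be compared within one modelling framework.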
1. Several spatial and non-spatial cross-validation (CV) methods have been used to perform map validation when additional sampling for validation purposes is not possible, yet it is unclear in which situations one CV method should be preferred over another. Three factors have been identified as determinants of the performance of CV methods for map validation: the prediction area (geographical interpolation vs. extrapolation), the sampling pattern and the landscape spatial autocorrelation. 2. In this study, we propose a new CV strategy that takes the geographical prediction space into account, and test how it compares with established CV methods under different configurations of these three factors. We propose a variation of Leave-One-Out (LOO) CV for map validation, called Nearest Neighbour Distance Matching (NNDM) LOO CV, in which the nearest neighbour distance distribution function between the test and training data during the CV process is matched to the nearest neighbour distance distribution function between the target prediction points and the training points. Using random forest as the machine learning algorithm, we then examine the suitability of NNDM LOO CV as well as the established LOO (non-spatial) and buffered-LOO (bLOO, spatial) CV methods in two simulations with varying prediction areas, landscape autocorrelation and sampling distributions. 3. LOO CV provided good map accuracy estimates in landscapes with short autocorrelation ranges, or when estimating geographical interpolation map accuracy with randomly distributed samples. bLOO CV yielded realistic error estimates when estimating map accuracy in new prediction areas, but generally overestimated geographical interpolation errors. NNDM LOO CV returned reliable estimates in all scenarios we considered. 4. While LOO and bLOO CV provided reliable map accuracy estimates only in certain situations, our newly proposed NNDM LOO CV method returned robust estimates and generalised to LOO and bLOO CV whenever these methods were the most appropriate approach. Our work highlights the necessity of considering the geographical prediction space when designing CV-based methods for map validation.
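A much-simplified sketch of the distance-matching idea behind NNDM LOO CV follows. The published algorithm matches the two nearest-neighbour distance distributions iteratively; here a matched distance is simply drawn per fold, purely to illustrate why plain LOO underestimates prediction distances under clustered sampling:

```python
import numpy as np

rng = np.random.default_rng(2)

def nn_dist(a, b):
    """Nearest-neighbour distance from each row of a to the rows of b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

# Clustered training sample and a regular prediction grid on the unit square
train = rng.uniform(0.0, 0.4, size=(80, 2))        # clustered in one corner
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
pred = np.column_stack([gx.ravel(), gy.ravel()])

# Target: distribution of prediction-to-training NN distances
target = nn_dist(pred, train)

# Simplified NNDM-style LOO: for each held-out point, draw a distance from
# the target distribution and exclude training points closer than that, so
# test-to-training NN distances mimic prediction-to-training ones.
folds = []
for i in range(train.shape[0]):
    r = rng.choice(target)
    d = np.linalg.norm(train - train[i], axis=1)
    keep = (d >= r) & (np.arange(train.shape[0]) != i)
    folds.append(np.flatnonzero(keep))

# Test-to-training NN distances under this exclusion scheme
nndm_d = np.array([np.linalg.norm(train[f] - train[i], axis=1).min()
                   for i, f in enumerate(folds) if f.size > 0])
```

With a clustered sample and a prediction grid covering the whole area, plain LOO sees much shorter test-to-training distances than prediction actually involves; the exclusion step pushes the CV distances toward the prediction-to-training distribution.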