The remarkable complexity of soil and its importance to a wide range of ecosystem services present major challenges to the modeling of soil processes. Although major progress in soil models has occurred in recent decades, models of soil processes remain fragmented across disciplines and ecosystem services, with considerable uncertainty remaining in the quality of predictions and several challenges still to be addressed. First, there is a need to improve the exchange of knowledge and experience among the different disciplines in soil science and to reach out to other Earth science communities. Second, the community needs to develop a new generation of soil models based on a systemic approach comprising relevant physical, chemical, and biological processes to address critical knowledge gaps in our understanding of soil processes and their interactions. Overcoming these challenges will facilitate exchanges between soil modeling and climate, plant, and social science modeling communities. It will allow us to help preserve ecosystem services, improve their assessment, and advance our understanding of climate-change feedback mechanisms, among others, thereby facilitating and strengthening communication among scientific disciplines and society. We review the role of modeling in quantifying key soil processes that shape ecosystem services, with a focus on provisioning and regulating services. We then identify key challenges in modeling soil processes, including the systematic incorporation of heterogeneity and uncertainty, the integration of data and models, and strategies for the effective integration of knowledge on physical, chemical, and biological soil processes. We discuss how the soil modeling community could best interface with modern modeling activities in other disciplines, such as climate, ecology, and plant research, and how to weave novel observation and measurement techniques into soil models.
We propose the establishment of an international soil modeling consortium to coherently advance soil modeling activities and foster communication with other Earth science disciplines. Such a consortium should promote soil modeling platforms and data repositories for model development, calibration, and intercomparison, which are essential for addressing contemporary challenges.
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. Computing this integral is highly challenging because its dimensionality equals the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) exact and fast analytical solutions, which are limited by strong assumptions; (2) numerical evaluation, which quickly becomes unfeasible for expensive models; and (3) approximations known as information criteria (ICs), such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively), which yield contradictory results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simple synthetic example for which, in some scenarios, an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
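The brute-force Monte Carlo reference mentioned above can be sketched briefly: BME is the prior-averaged likelihood, so drawing parameters from the prior and averaging their likelihoods (in log space, via log-sum-exp, for numerical stability) yields an unbiased estimate. The toy models, priors, and data below are hypothetical illustrations, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: y = 2x + noise, with known measurement error.
x = np.linspace(0.0, 1.0, 20)
sigma = 0.1
y_obs = 2.0 * x + rng.normal(0.0, sigma, x.size)

def log_like(y_pred):
    """Gaussian log-likelihood of the observations given a prediction."""
    r = y_obs - y_pred
    return -0.5 * np.sum((r / sigma) ** 2) - y_obs.size * np.log(sigma * np.sqrt(2 * np.pi))

def log_bme_monte_carlo(model, prior_sampler, n=20_000):
    """Brute-force MC estimate of log-BME: average likelihood over prior draws."""
    thetas = prior_sampler(n)
    ll = np.array([log_like(model(t)) for t in thetas])
    m = ll.max()  # log-sum-exp trick avoids underflow of exp(ll)
    return m + np.log(np.mean(np.exp(ll - m)))

# Model 1: y = a*x with prior a ~ U(0, 4); Model 2: constant y = c, c ~ U(-1, 3).
lbme1 = log_bme_monte_carlo(lambda a: a * x, lambda n: rng.uniform(0.0, 4.0, n))
lbme2 = log_bme_monte_carlo(lambda c: c * np.ones_like(x), lambda n: rng.uniform(-1.0, 3.0, n))
```

Here the linear model should receive the higher evidence, since the constant model cannot reproduce the trend in the data at this noise level.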
Ensemble Kalman filters (EnKFs) are a successful tool for estimating state variables in atmospheric and oceanic sciences. Recent research has prepared the EnKF for parameter estimation in groundwater applications. EnKFs are optimal in the sense of Bayesian updating only if all involved variables are multivariate Gaussian. Subsurface flow and transport state variables, however, generally do not show Gaussian dependence on hydraulic log conductivity or on one another, even if log conductivity is multi-Gaussian. To improve EnKFs in this context, we apply nonlinear, monotonic transformations to the observed states, rendering them Gaussian (Gaussian anamorphosis, GA). Similar ideas have recently been presented by Béal et al. (2010) in the context of state estimation. Our work transfers and adapts this methodology to parameter estimation. Additionally, we address the treatment of measurement errors in the transformation and provide several multivariate analysis tools to evaluate the expected usefulness of GA beforehand. For illustration, we present a first-time application of an EnKF to parameter estimation from 3-D hydraulic tomography in multi-Gaussian log conductivity fields. Results show that (1) GA achieves an implicit pseudolinearization of drawdown data as a function of log conductivity and (2) this makes both parameter identification and prediction of flow and transport more accurate. Combining EnKFs with GA yields a computationally efficient tool for nonlinear inversion of data with improved accuracy. This is an attractive benefit, given that linearization-free methods such as particle filters are computationally extremely demanding.
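A minimal sketch of the Gaussian anamorphosis step, using an empirical normal-score transform: data are mapped to standard-normal quantiles via their ranks, which is monotonic and therefore preserves ordering. The lognormal "drawdown" ensemble is a hypothetical stand-in for simulated states; the paper's treatment of measurement errors within the transform is not covered by this sketch.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_anamorphosis(values):
    """Empirical normal-score transform (Gaussian anamorphosis):
    map data to standard-normal quantiles through their ranks."""
    n = values.size
    u = rankdata(values) / (n + 1)  # plotting positions strictly inside (0, 1)
    return norm.ppf(u)

# Hypothetical skewed ensemble of simulated drawdown values
rng = np.random.default_rng(1)
d = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
z = gaussian_anamorphosis(d)  # approximately standard normal
```

In an EnKF workflow the same transform would be applied to simulated and observed states before the update step, and inverted afterward.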
Scalar mixing plays a significant role in transport in geophysical flows because it controls dilution and is a main driver for many chemical reactions. Here we study the local-scale flow mechanisms that lead to enhanced scalar mixing, and how they impact the global mixing behavior. Mixing is quantified in terms of the entropy of the scalar distribution. It is shown that the evolution of entropy is directly linked to the flow topology in terms of the Okubo-Weiss parameter Θ. Dominant shear and stretching deformation (Θ > 0) leads to a strong increase of local mixing strength, while dominant vorticity (Θ < 0) has only a minor impact. This allows us to delineate regions of increased scalar mixing potential by mapping out the spatial distribution of Θ(x), and to relate global scalar mixing to an areally averaged effective Okubo-Weiss measure.
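The Okubo-Weiss diagnostic is straightforward to compute from a gridded 2-D velocity field. The sketch below uses the common convention Θ = s_n² + s_s² − ω² (normal strain, shear strain, relative vorticity) and two hypothetical analytical flows whose sign of Θ is known exactly: pure stretching (Θ > 0) and solid-body rotation (Θ < 0).

```python
import numpy as np

def okubo_weiss(u, v, dx, dy):
    """Okubo-Weiss parameter Θ = s_n² + s_s² − ω² on a 2-D grid:
    deformation-dominated where Θ > 0, vorticity-dominated where Θ < 0."""
    du_dy, du_dx = np.gradient(u, dy, dx)  # axis 0 is y, axis 1 is x
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_n = du_dx - dv_dy       # normal (stretching) strain
    s_s = dv_dx + du_dy       # shear strain
    omega = dv_dx - du_dy     # relative vorticity
    return s_n**2 + s_s**2 - omega**2

y, x = np.mgrid[0:1:33j, 0:1:33j]
dx = dy = 1.0 / 32

# Pure stretching u = x, v = -y: Θ = 4 everywhere (mixing-enhancing)
theta_strain = okubo_weiss(x, -y, dx, dy)
# Solid-body rotation u = -y, v = x: Θ = -4 everywhere
theta_rot = okubo_weiss(-y, x, dx, dy)
```

Mapping Θ(x) over a computed velocity field in this way delineates the strain-dominated regions where the abstract reports enhanced local mixing.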
Geostatistical optimal design optimizes subsurface exploration for maximum information toward task-specific prediction goals. Until recently, most geostatistical design studies have assumed that the geostatistical description (i.e., the mean, trends, covariance models and their parameters) is given a priori. This contradicts, as emphasized by Rubin and Dagan (1987a), the fact that few or even no data support such assumptions prior to the bulk of the exploration effort. We believe that geostatistical design should (1) avoid unjustified a priori assumptions on the geostatistical description, (2) instead reduce geostatistical model uncertainty as a secondary design objective, (3) weight this secondary objective optimally toward the overall prediction goal, and (4) be robust even under inaccurate geostatistical assumptions. Bayesian geostatistical design follows these guidelines by considering uncertain covariance model parameters. We transfer this concept from kriging-like applications to geostatistical inverse problems. We also deem it inappropriate to consider parametric uncertainty only within a single covariance model. The Matérn family of covariance functions has an additional shape parameter. Controlling model shape by a parameter converts covariance model selection into parameter identification and resembles Bayesian model averaging over a continuous spectrum of covariance models. This is appealing because it generalizes Bayesian model averaging from a finite to an infinite number of models. We illustrate how our approach fulfills the above four guidelines in a series of synthetic test cases. The underlying scenarios are to minimize the prediction variance of (1) contaminant concentration or (2) arrival time at an ecologically sensitive location by optimal placement of hydraulic head and log conductivity measurements.
Results highlight how both the impact of geostatistical model uncertainty and the sampling network design vary according to the choice of objective function.
Citation: Nowak, W., F. P. J. de Barros, and Y. Rubin (2010), Bayesian geostatistical design: Task-driven optimal site investigation when the geostatistical model is uncertain, Water Resour. Res., 46, W03535.
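The role of the Matérn family's shape parameter ν can be illustrated with a short sketch (a generic textbook form of the covariance, not code from the study): ν = 0.5 recovers the exponential model exactly, and ν → ∞ approaches the Gaussian model, so varying ν continuously spans a spectrum of covariance models.

```python
import numpy as np
from scipy.special import gamma, kv

def matern(h, sigma2=1.0, ell=1.0, nu=0.5):
    """Matérn covariance C(h) = σ² 2^(1−ν)/Γ(ν) (√(2ν) h/λ)^ν K_ν(√(2ν) h/λ);
    the shape parameter nu interpolates among common covariance models."""
    h = np.asarray(h, dtype=float)
    c = np.full_like(h, sigma2)          # C(0) = σ², handled separately
    pos = h > 0
    arg = np.sqrt(2.0 * nu) * h[pos] / ell
    c[pos] = sigma2 * 2.0**(1.0 - nu) / gamma(nu) * arg**nu * kv(nu, arg)
    return c

h = np.linspace(0.0, 3.0, 50)
c_exp = matern(h, nu=0.5)  # identical to the exponential model exp(-h/ell)
```

Treating ν as one more uncertain parameter in this function is what converts covariance model selection into parameter identification, as described above.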
Pumping tests are among the most common techniques of hydrogeological site assessment. While the steady-state drawdown is determined by the distribution of transmissivity alone, the transient behavior is also influenced by the storativity field. In geostatistical inverse modeling, the spatial distributions of both transmissivity and storativity are inferred from the drawdown curves and prior information on the spatial correlation of the parameter fields. So far, however, transient data have hardly been analyzed by geostatistical inverse methods because the computational effort is rather high. In the present study, we characterize the drawdown by its temporal moments. We present moment-generating equations and corresponding equations to compute the sensitivity of the temporal moments of drawdown with respect to the distributions of transmissivity and storativity. We utilize these equations to infer the transmissivity and storativity fields from transient pumping tests using the quasi-linear geostatistical approach of inverse modeling. Considering temporal moments rather than full drawdown curves drastically reduces the computational effort of the estimation procedure. In test cases we show that the first two temporal moments are sufficient to characterize the drawdown curves. We investigate how erroneous assumptions regarding the spatial variability of storativity affect the estimate of the transmissivity field, and we analyze the effect of truncating the measured drawdown curves.
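The idea of compressing a drawdown curve into its temporal moments can be sketched as follows, using the generic definition m_k = ∫ t^k s(t) dt. The pulse response s(t) = t·e^(−t) below is a hypothetical curve chosen because its moments are known exactly (m₀ = 1, m₁ = 2, m₂ = 6), not data from the study.

```python
import numpy as np

def trapz(f, t):
    """Trapezoidal integration, written out explicitly for portability."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

def temporal_moments(t, s, k_max=2):
    """k-th temporal moments m_k = ∫ t^k s(t) dt of a drawdown signal;
    assumes s(t) has decayed to ~0 by the end of the record."""
    return [trapz(t**k * s, t) for k in range(k_max + 1)]

# Hypothetical pulse response with exactly known moments
t = np.linspace(0.0, 50.0, 5001)
s = t * np.exp(-t)
m0, m1, m2 = temporal_moments(t, s)
mean_arrival = m1 / m0  # characteristic time scale of the curve
```

Inverting a handful of such scalars per observation well, rather than full time series, is what gives the moment approach its computational advantage.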