Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of "pilot points" as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), use of pilot points can add a great deal of flexibility to the calibration process at the same time as it makes this process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which recognize the stochastic characterization of an area at the same time that they satisfy the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
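The pilot-point idea described above assigns hydraulic property values at a scattering of points and spreads them onto model cells by spatial interpolation. As a minimal self-contained sketch, the snippet below uses inverse-distance weighting; practical implementations (e.g., with PEST utilities) more commonly use kriging, and all function and variable names here are illustrative assumptions.

```python
import numpy as np

def interpolate_from_pilot_points(pp_xy, pp_values, grid_xy, power=2.0):
    """Spread property values (e.g., log hydraulic conductivity) from
    pilot points onto model cells by inverse-distance weighting.
    Kriging is the usual choice in practice; IDW keeps the sketch
    dependency-free."""
    # pairwise distances: (n_cells, n_pilot_points)
    d = np.linalg.norm(grid_xy[:, None, :] - pp_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at a pilot point
    w = d ** -power
    return (w @ pp_values) / w.sum(axis=1)

# three pilot points carrying log-K values, interpolated to two cells
pp_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pp_vals = np.array([-3.0, -1.0, -2.0])
grid_xy = np.array([[0.5, 0.5], [0.0, 0.0]])
field = interpolate_from_pilot_points(pp_xy, pp_vals, grid_xy)
```

During calibration, the estimated parameters are the pilot-point values themselves; the interpolated field is what the model actually sees, so areas of high and low property value emerge from the estimation rather than being fixed in advance.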
An equation is derived through which the variance of predictive error of a calibrated model can be calculated. This equation has two terms. The first term represents the contribution to predictive error variance that results from an inability of the calibration process to capture all of the parameterization detail necessary for the making of an accurate prediction. If a model is “uncalibrated,” with parameter values being supplied solely through “outside information,” this is the only term required. The second term represents the contribution to predictive error variance arising from measurement noise. In an overdetermined system, such as that which may be obtained through “parameter lumping” (e.g., through the introduction of a spatial zonation scheme), this is the only term required. It is shown, however, that parameter lumping is a form of “implicit regularization” and that ignoring the implied first term of the predictive error variance equation can potentially lead to underestimation of predictive error variance. A model's role as a predictor of environmental behavior can be enhanced if it is calibrated in such a way as to reduce the variance of those predictions which it is required to make. It is shown that in some circumstances this can be accomplished through “overfitting” against historical field data. It can also be accomplished by giving greater weight to those measurements which carry the greatest information content with respect to a required prediction. This suggests that a departure may be necessary from the custom of using a single “calibrated model” for the making of many different predictions. Instead, model calibration may need to be repeated many times so that in each case the calibration process is optimized for the making of a specific model prediction.
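The two-term equation referred to above can be written as follows. The abstract does not reproduce the equation itself, so the notation below is an assumed reconstruction in the style of the surrounding literature:

```latex
\sigma^{2}_{s-\hat{s}}
  \;=\; \mathbf{y}^{T}\,(\mathbf{I}-\mathbf{R})\,C(\mathbf{p})\,(\mathbf{I}-\mathbf{R})^{T}\,\mathbf{y}
  \;+\; \mathbf{y}^{T}\,\mathbf{G}\,C(\boldsymbol{\varepsilon})\,\mathbf{G}^{T}\,\mathbf{y}
```

Here $\mathbf{y}$ holds the sensitivities of the prediction $s$ to the parameters $\mathbf{p}$, $\mathbf{G}$ is the matrix through which parameters are estimated from observations, $\mathbf{R}$ is the resolution matrix, $C(\mathbf{p})$ is the covariance of parameter variability, and $C(\boldsymbol{\varepsilon})$ is the covariance of measurement noise. The first term (the cost of unresolved parameterization detail) vanishes as $\mathbf{R} \rightarrow \mathbf{I}$; the second (the cost of measurement noise) is the only term that survives for an overdetermined, lumped-parameter problem.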
[1] A hybrid approach to the regularized inversion of highly parameterized environmental models is described. The method is based on constructing a highly parameterized base model, calculating base parameter sensitivities, and decomposing the base parameter normal matrix into eigenvectors representing principal orthogonal directions in parameter space. The decomposition is used to construct super parameters. Super parameters are factors by which principal eigenvectors of the base parameter normal matrix are multiplied in order to minimize a composite least squares objective function. These eigenvectors define orthogonal axes of a parameter subspace for which information is available from the calibration data. The coordinates of the solution are sought within this subspace. Super parameters are estimated using a regularized nonlinear Gauss-Marquardt-Levenberg scheme. Though super parameters are estimated, Tikhonov regularization constraints are imposed on base parameters. Tikhonov regularization mitigates overfitting and promotes the estimation of reasonable base parameters. Use of a large number of base parameters enables the inversion process to be receptive to the information content of the calibration data, including aspects pertaining to small-scale parameter variations. Because the number of super parameters sustainable by the calibration data may be far less than the number of base parameters used to define the original problem, the computational burden for solution of the inverse problem is reduced. The hybrid methodology is described and applied to a simple synthetic groundwater flow model. It is then applied to a real-world groundwater flow and contaminant transport model. The approach and programs described are applicable to a range of modeling disciplines.
Citation: Tonkin, M. J., and J. Doherty (2005), A hybrid regularized inversion methodology for highly parameterized environmental models, Water Resour. Res., 41, W10412.
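The super-parameter construction described above can be sketched as follows: form the weighted normal matrix from the base-parameter Jacobian, take its leading eigenvectors, and estimate coefficients on those directions instead of the base parameters themselves. This is a minimal illustration, not the PEST implementation; function names are assumptions.

```python
import numpy as np

def super_parameter_basis(J, Q, n_super):
    """Return the n_super leading eigenvectors of the weighted normal
    matrix J^T Q J. Columns span the parameter subspace about which the
    calibration data carry information; super parameters are the
    coefficients on these directions."""
    normal = J.T @ Q @ J
    eigvals, eigvecs = np.linalg.eigh(normal)      # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]              # descending
    return eigvecs[:, order[:n_super]]

def base_from_super(V, s, p0):
    """Map estimated super parameters s back to base-parameter space,
    relative to the starting base parameters p0 (so Tikhonov constraints
    can still be applied to base parameters)."""
    return p0 + V @ s

# toy problem: 3 base parameters, 2 observations
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
Q = np.eye(2)                                      # observation weights
V = super_parameter_basis(J, Q, n_super=2)
p = base_from_super(V, np.zeros(2), np.ones(3))
```

Because only `n_super` coefficients are estimated, each iteration of the inversion needs far fewer model runs than the number of base parameters would suggest.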
[1] "Structural noise" is a term often used to describe model-to-measurement misfit that cannot be ascribed to measurement noise and therefore must be ascribed to the imperfect nature of a numerical model as a simulator of reality. As such, it is often the dominant contributor to model-to-measurement misfit. As the name "structural noise" implies, this type of misfit is often treated as an additive term to measurement noise when assessing model parameter and predictive uncertainty. This paper inquires into the nature of defect-induced model-to-measurement misfit and provides a conceptual basis for accommodating it. It is shown that inasmuch as defect-induced model-to-measurement misfit can be characterized as "noise," this noise is likely to show a high degree of spatial and temporal correlation; furthermore, its covariance matrix may approach singularity. However, the deleterious impact of structural noise on the model calibration process may be mitigated in a variety of ways. These include adoption of a highly parameterized approach to model construction and calibration (including the strategic use of compensatory parameters where appropriate), processing of observations and their model-generated counterparts in ways that are able to filter out structural noise prior to fitting one to the other, and/or implementation of a weighting strategy that gives prominence to observations that most resemble predictions required of a model.
The use of a fitted-parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can lead to predictive nonuniqueness. The extent of model predictive uncertainty should be investigated if management decisions are to be based on model projections. Using models built for four neighboring watersheds in the Neuse River Basin of North Carolina, the application of the automated parameter optimization software PEST in conjunction with the Hydrologic Simulation Program-FORTRAN (HSPF) is demonstrated. Parameter nonuniqueness is illustrated, and a method is presented for calculating many different sets of parameters, all of which acceptably calibrate a watershed model. A regularization methodology is discussed in which models for similar watersheds can be calibrated simultaneously. Using this method, parameter differences between watershed models can be minimized while maintaining fit between model outputs and field observations. In recognition of the fact that parameter nonuniqueness and predictive uncertainty are inherent to the modeling process, PEST's nonlinear predictive analysis functionality is then used to explore the extent of model predictive uncertainty.
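The simultaneous-calibration idea above amounts to a composite objective function: measurement misfit summed over the watershed models, plus a Tikhonov penalty on parameter differences between neighboring models. The sketch below shows only the objective itself, under assumed names and an assumed quadratic penalty; the actual PEST regularization machinery is considerably richer.

```python
import numpy as np

def composite_objective(residuals_by_model, params_by_model, mu):
    """Sum of squared measurement residuals over all watershed models,
    plus mu times the summed squared differences between each pair of
    model parameter vectors (which pulls similar watersheds toward
    similar parameter values)."""
    phi_meas = sum(float(r @ r) for r in residuals_by_model)
    phi_reg = 0.0
    p = params_by_model
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            d = p[i] - p[j]
            phi_reg += float(d @ d)
    return phi_meas + mu * phi_reg
```

The regularization weight `mu` controls the trade-off: large values force near-identical parameters across watersheds, while `mu = 0` recovers independent calibration of each model.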
[1] We describe a subspace Monte Carlo (SSMC) technique that reduces the burden of calibration-constrained Monte Carlo when undertaken with highly parameterized models. When Monte Carlo methods are used to evaluate the uncertainty in model outputs, ensuring that parameter realizations reproduce the calibration data requires many model runs to condition each realization. In the new SSMC approach, the model is first calibrated using a subspace regularization method, ideally the hybrid Tikhonov-TSVD "superparameter" approach described by Tonkin and Doherty (2005). Sensitivities calculated with the calibrated model are used to define the calibration null-space, which is spanned by parameter combinations that have no effect on simulated equivalents to available observations. Next, a stochastic parameter generator is used to produce parameter realizations, and for each a difference is formed between the stochastic parameters and the calibrated parameters. This difference is projected onto the calibration null-space and added to the calibrated parameters. If the model is no longer calibrated, parameter combinations that span the calibration solution space are reestimated while retaining the null-space projected parameter differences as additive values. The recalibration can often be undertaken using existing sensitivities, so that conditioning requires only a small number of model runs. Using synthetic and real-world model applications we demonstrate that the SSMC approach is general (it is not limited to any particular model or any particular parameterization scheme) and that it can rapidly produce a large number of conditioned parameter sets.
Citation: Tonkin, M., and J. Doherty (2009), Calibration-constrained Monte Carlo analysis of highly parameterized models using subspace techniques, Water Resour. Res., 45, W00B10.
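The null-space projection step described above can be illustrated in a few lines: compute the null space of the Jacobian by SVD, project the difference between a stochastic realization and the calibrated parameters onto it, and add the projected difference back. This is a bare-bones sketch (names and the rank tolerance are assumptions), omitting the recalibration of solution-space components that the SSMC method applies when the projected realization degrades the fit.

```python
import numpy as np

def null_space_project(J, p_cal, p_stoch, tol=1e-10):
    """Return a parameter set that retains the stochastic character of
    p_stoch in directions the calibration data cannot see, while (to
    first order) leaving simulated equivalents to observations
    unchanged from those of the calibrated parameters p_cal."""
    U, s, Vt = np.linalg.svd(J)
    rank = int((s > tol * s.max()).sum())
    Vnull = Vt[rank:].T                  # columns span the calibration null space
    d = p_stoch - p_cal
    return p_cal + Vnull @ (Vnull.T @ d)

# toy example: one observation informs only the first of three parameters
J = np.array([[1.0, 0.0, 0.0]])
p_cal = np.zeros(3)
p_stoch = np.array([1.0, 2.0, 3.0])
p_cond = null_space_project(J, p_cal, p_stoch)
```

Because the projection preserves `J @ p` to first order, each realization starts out (approximately) calibrated, which is why conditioning then needs only a small number of additional model runs.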
[1] Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details borne of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often facilitate good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced.
It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Citation: Doherty, J., and S. Christensen (2011), Use of paired simple and complex models to reduce predictive bias and quantify uncertainty, Water Resour. Res., 47, W12534.