Abstract: [1] A hybrid approach to the regularized inversion of highly parameterized environmental models is described. The method is based on constructing a highly parameterized base model, calculating base parameter sensitivities, and decomposing the base parameter normal matrix into eigenvectors representing principal orthogonal directions in parameter space. The decomposition is used to construct super parameters. Super parameters are factors by which principal eigenvectors of the base parameter normal matrix are mu…
“…Truncating the parameter space based on these eigenvalues led to improved performance during the initial iterations, but may result in suboptimal parameter sets if the truncation limit is set too high. This difficulty may be addressed by dynamically lowering the truncation limit, or by optimizing for superparameters (i.e., parameters that are aligned with the eigenvectors) as proposed by Tonkin and Doherty (2005).…”
Section: Results
“…Nevertheless, in certain applications, the objective function may contain well-justified contributions from regularization, which results in a formulation similar to the hybrid regularization methodology described by Tonkin and Doherty (2005). The problem we address is simpler in that it is only concerned with the minimization algorithm rather than the formulation of the inverse problem itself.…”
Section: Introduction
“…If truncation is merely employed to obtain a more efficient solution to an otherwise robust, overdetermined inverse problem, k can be adjusted heuristically as the minimization proceeds, so that more and more parameters may enter the calibration solution space. Tonkin and Doherty (2005) propose setting k as the number of eigenvalues that exhibit a ratio to the largest eigenvalue that is greater than 10⁻⁶. As an alternative to this empirical threshold value, the dimension of the calibration solution space can be chosen such that the final value of the objective function (excluding contributions from Tikhonov regularization) is commensurate with the expected measurement noise level (Finsterle and Pruess, 1995; Tonkin and Doherty, 2005; Moore and Doherty, 2005), or that a predefined maximum prediction uncertainty is not exceeded (in case the inversion is part of an estimation-prediction framework).…”
“…Tonkin and Doherty (2005) propose setting k as the number of eigenvalues that exhibit a ratio to the largest eigenvalue that is greater than 10⁻⁶. As an alternative to this empirical threshold value, the dimension of the calibration solution space can be chosen such that the final value of the objective function (excluding contributions from Tikhonov regularization) is commensurate with the expected measurement noise level (Finsterle and Pruess, 1995; Tonkin and Doherty, 2005; Moore and Doherty, 2005), or that a predefined maximum prediction uncertainty is not exceeded (in case the inversion is part of an estimation-prediction framework). Evidently, choosing the appropriate truncation level is not a straightforward task, may be problem dependent, and thus needs some further experimentation.…”
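The eigenvalue-ratio rule quoted above can be sketched in a few lines; this is an illustrative interpretation (the function name and the toy sensitivity matrix are ours), using the fact that the eigenvalues of the normal matrix JᵀJ are the squared singular values of the sensitivity matrix J:

```python
import numpy as np

def truncation_index(J, threshold=1e-6):
    """Truncation level k: the number of eigenvalues of the normal
    matrix J^T J whose ratio to the largest eigenvalue exceeds
    `threshold`. Sketch only; the eigenvalues of J^T J are the squared
    singular values of the sensitivity (Jacobian) matrix J."""
    s = np.linalg.svd(J, compute_uv=False)  # descending order
    eig = s**2
    return int(np.sum(eig / eig[0] > threshold))

# Hypothetical sensitivity matrix with a rapidly decaying spectrum:
J = np.diag([1.0, 1e-1, 1e-2, 1e-4])
k = truncation_index(J)  # eigenvalue ratios 1, 1e-2, 1e-4, 1e-8 -> k = 3
```

With the 10⁻⁶ threshold, the three leading directions span the calibration solution space and the last direction falls into the null space.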
We propose a modification to the Levenberg-Marquardt minimization algorithm for a more robust and more efficient calibration of highly parameterized, strongly nonlinear models of multiphase flow through porous media. The new method combines the advantages of truncated singular value decomposition with those of the classical Levenberg-Marquardt algorithm, thus enabling a more robust solution of underdetermined inverse problems with complex relations between the parameters to be estimated and the observable state variables used for calibration. The truncation limit separating the solution space from the calibration null space is re-evaluated during the iterative calibration process. In between these re-evaluations, fewer forward simulations are required, compared to the standard approach, to calculate the approximate sensitivity matrix. Truncated singular values are used to calculate the Levenberg-Marquardt parameter updates, ensuring that safe small steps along the steepest-descent direction are taken for highly correlated parameters of low sensitivity, whereas efficient quasi-Gauss-Newton steps are taken for independent parameters with high impact. The performance of the proposed scheme is demonstrated for a synthetic data set representing infiltration into a partially saturated, heterogeneous soil, where hydrogeological, petrophysical, and geostatistical parameters are estimated based on the joint inversion of hydrological and geophysical data.
1 Corresponding author; SAFinsterle@lbl.gov, Earth Sciences Division, 1 Cyclotron Road, MS 90-1116, Berkeley, CA 94720; phone: (510) 486-5205; fax: (510)
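The kind of hybrid step described in the abstract above can be illustrated with a damped pseudo-inverse in a truncated SVD basis. This is a generic sketch of the idea, not the exact scheme of the paper: directions with large singular values receive near Gauss-Newton steps, weakly sensitive correlated directions are damped toward small steepest-descent steps, and directions beyond the truncation index k (the calibration null space) are left unchanged.

```python
import numpy as np

def lm_tsvd_step(J, r, lam, k):
    """One Levenberg-Marquardt parameter update in a truncated SVD
    basis (illustrative sketch). J: sensitivity matrix, r: residual
    vector, lam: damping parameter, k: truncation index."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    # Filter factors: ~1/s for large s (Gauss-Newton-like),
    # ~s/lam for small s (steepest-descent-like).
    f = s[:k] / (s[:k]**2 + lam)
    return Vt[:k, :].T @ (f * (U[:, :k].T @ r))

# With lam = 0 and no truncation this reduces to the Gauss-Newton step:
J = np.diag([2.0, 1.0])
r = np.array([2.0, 1.0])
dp = lm_tsvd_step(J, r, lam=0.0, k=2)  # solves J dp = r -> [1.0, 1.0]
```

Setting k below full rank simply zeroes the update in the truncated directions, which is what confines the step to the calibration solution space.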
“…PEST provides two basic types of regularization for underdetermined inverse problems: Tikhonov regularization and a method based on Truncated Singular Value Decomposition (TSVD) known as SVD-assist (Tonkin and Doherty, 2005; Doherty, 2010; Doherty and others, 2010). For this study, Tikhonov regularization was the method of choice because correlation and instability within DVRFS v. 2.0 are the result of the presence of physical processes (Hill and Østerby, 2003), as well as a large number of hydraulic properties.…”
[1] Geological storage of CO₂ requires multiphase flow models coupled with key hydrogeologic features to accurately predict the long-term consequences. Quantifying prediction uncertainty during geological CO₂ storage requires a computationally efficient and practically useful framework. This paper presents a comparative study between ensemble-based filtering algorithms (En-As) and calibration-constrained null-space Monte Carlo (NSMC) methods. For the En-As, we use the ensemble Kalman filter (EnKF), ensemble smoother (ES), ES with multiple data assimilation (ES-MDA), and EnKF and ES with the pilot point method. For the NSMC, calibrated models with various parameterization schemes are tested, and single and multiple NSMC (M-NSMC) methods are used. A synthetic case with two layers was developed to mimic an actual CO₂ injection pilot test where one injection and two observation wells are located within a short distance. Observed data include bottomhole pressure at the injection well and gas saturation (Sg) at two observation wells in the upper layer. Model parameters include horizontal permeability and porosity. Comparison of results shows that both methodologies yield a good history match and reasonable prediction results in a computationally efficient way. In particular, ES-MDA and M-NSMC resulted in smaller objective function values and lower prediction uncertainties of Sg profiles compared to the other variants tested in this work. The En-As with the pilot point method have higher variability of permeability compared to those without one, but the En-As show smoother permeability fields compared to the NSMC methods, because stochastic randomness at the grid scale was included to generate the NSMC fields.
Both ensemble-based and NSMC algorithms are unable to correct the structural orientation of the prior ensemble members using only the sparse dynamic data from wells, even though they obtain a reasonable history match, suggesting that structural uncertainty should be incorporated into the prior information. Overall, ES-MDA has an advantage in terms of computational efficiency, whereas, at the expense of additional computation, M-NSMC shows applicability for highly nonlinear problems such as multiphase flow problems.
Citation: Tavakoli, R., H. Yoon, M. Delshad, A. H. ElSheikh, M. F. Wheeler, and B. W. Arnold (2013), Comparison of ensemble filtering algorithms and null-space Monte Carlo for parameter estimation and uncertainty quantification using CO₂ sequestration data,
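The ES-MDA scheme compared above can be sketched as a Kalman-like ensemble update with inflated observation noise; this is a generic Emerick-and-Reynolds-style sketch (function name and shapes are ours), not the exact implementation used in the study:

```python
import numpy as np

def es_mda_update(M, D, d_obs, ce, alpha, rng):
    """One ES-MDA assimilation step (illustrative sketch).
    M: (n_param, n_ens) parameter ensemble; D: (n_obs, n_ens) predicted
    data; d_obs: (n_obs,) observations; ce: (n_obs,) measurement-error
    variances; alpha: inflation factor for this step (over all
    assimilation steps the factors must satisfy sum(1/alpha_a) = 1)."""
    ne = M.shape[1]
    Am = M - M.mean(axis=1, keepdims=True)   # parameter anomalies
    Ad = D - D.mean(axis=1, keepdims=True)   # predicted-data anomalies
    Cmd = Am @ Ad.T / (ne - 1)               # cross-covariance
    Cdd = Ad @ Ad.T / (ne - 1)               # predicted-data covariance
    # Perturb observations with noise inflated by alpha, one draw per member.
    noise = np.sqrt(alpha * ce)[:, None] * rng.standard_normal(D.shape)
    K = Cmd @ np.linalg.inv(Cdd + alpha * np.diag(ce))
    return M + K @ (d_obs[:, None] + noise - D)
```

Repeating this update over several steps with decreasing data mismatch is what distinguishes ES-MDA from a single-pass ensemble smoother.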