Summary

Most geophysical inverse problems are nonlinear and rely on numerical forward solvers involving discretization and simplified representations of the underlying physics. As a result, forward-modeling errors are inevitable. In practice, such model errors tend to be either completely ignored, which leads to biased and over-confident inversion results, or only partly accounted for using restrictive Gaussian assumptions. Here, we rely on deep generative neural networks to learn problem-specific, low-dimensional probabilistic representations of the discrepancy between high-fidelity and low-fidelity forward solvers. These representations are then used to probabilistically invert for the model error jointly with the target geophysical property field, using the computationally cheap, low-fidelity forward solver. To this end, we combine a Markov chain Monte Carlo (MCMC) inversion algorithm with a trained convolutional neural network of the spatial generative adversarial network (SGAN) type, whereby at each MCMC step the simulated low-fidelity forward response is corrected using a proposed model-error realization. Considering the crosshole ground-penetrating radar traveltime tomography inverse problem, we train SGAN networks on traveltime discrepancy images between (1) curved-ray (high-fidelity) and straight-ray (low-fidelity) forward solvers, and (2) finite-difference time-domain (high-fidelity) and straight-ray (low-fidelity) forward solvers. We demonstrate that the SGAN is able to learn the spatial statistics of the model error and that suitable representations of both the subsurface model and the model error can be recovered by MCMC. Compared with inversion results obtained when model errors are either ignored or approximated by a Gaussian distribution, our method shows lower posterior parameter bias and better explains the observed traveltime data.
Our method is most advantageous when high-fidelity forward solvers involve heavy computational costs and the Gaussian assumption on model errors is inappropriate. Unstable MCMC convergence due to nonlinearities introduced by our method remains a challenge to be addressed in future work.
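The joint inversion described above can be illustrated with a minimal, hypothetical sketch. The two "solvers" below are 1-D toy stand-ins (not the traveltime solvers of the abstract), and `error_generator` is a simple parametric stand-in for a trained SGAN generator mapping a latent variable to a model-error realization. The essential step is that, inside a standard Metropolis sampler, the low-fidelity response is corrected by a proposed model-error realization before evaluating the likelihood, so the chain samples the model and the model-error latent jointly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-ins for the solvers: the "high-fidelity" solver adds a
# smooth nonlinear term that the "low-fidelity" solver ignores.
def forward_hi(m):
    return 2.0 * m + 0.3 * np.sin(3.0 * m)

def forward_lo(m):
    return 2.0 * m

# Stand-in for a trained SGAN generator: latent theta -> model-error realization.
def error_generator(theta):
    return 0.3 * np.sin(3.0 * theta)

m_true = 1.2
sigma = 0.05
d_obs = forward_hi(m_true) + rng.normal(0.0, sigma)

def log_likelihood(m, theta):
    # Low-fidelity response corrected by the proposed model-error realization.
    d_sim = forward_lo(m) + error_generator(theta)
    return -0.5 * ((d_obs - d_sim) / sigma) ** 2

# Joint Metropolis sampling of (m, theta): flat prior on m,
# standard-normal prior on the latent theta.
m, theta = 1.0, 0.0
ll = log_likelihood(m, theta)
samples = []
for it in range(20000):
    m_p = m + rng.normal(0.0, 0.05)
    t_p = theta + rng.normal(0.0, 0.05)
    ll_p = log_likelihood(m_p, t_p)
    log_alpha = (ll_p - 0.5 * t_p**2) - (ll - 0.5 * theta**2)
    if np.log(rng.uniform()) < log_alpha:
        m, theta, ll = m_p, t_p, ll_p
    samples.append(m)

post = np.array(samples[10000:])  # discard burn-in
print(post.mean(), post.std())
```

Because the error family spans a range of corrections, the posterior on `m` remains a distribution rather than collapsing to a single value, which is the intended behavior: uncertainty about the model error propagates into the recovered model.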
We seek to develop a methodology enabling fast geostatistical simulations honoring both geophysical data and a complex prior model. In particular, we consider a multiple-point statistics (MPS) framework in which a training image (TI) describes the available prior knowledge. Accurate posterior sampling is then possible using a so-called extended Metropolis algorithm in which proposals are drawn from the prior using sequential geostatistical resampling. Such a Markov chain Monte Carlo (MCMC) algorithm will eventually locate and sample proportionally to the posterior distribution; however, it is often exceedingly slow and typically demands millions of MCMC iterations before the posterior is sampled sufficiently. We are developing a methodology in which the MPS simulation is built up iteratively, pixel by pixel, starting from an empty grid. At each pixel, multiple proposals are generated using an MPS algorithm and are accepted proportionally to the likelihood of conditioning data given as linear averages (for instance, geophysical data). The likelihood function is generally intractable because it depends on pixels that have not yet been sampled. We approximate it using a Gaussian model in which the posterior mean and covariance are updated sequentially as the simulation builds up. The posterior statistics are approximated by running the algorithm multiple times (sequentially or in parallel). Considering crosshole first-arrival ground-penetrating radar data, we assess the accuracy of our methodology against the extended Metropolis method, both for multi-Gaussian priors for which analytical posteriors are available and for more complex training images. Our approach is inherently approximate owing to the use of a finite training image, a finite number of candidates at each pixel, and the need to approximate intractable likelihood functions. Nevertheless, preliminary results are promising, as the method directly yields a reasonable estimate at a reduced computational cost compared to MCMC.
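The sequential construction can be sketched on a toy problem. Everything below is a hypothetical stand-in: a 1-D binary "image" whose prior is reduced to a single marginal probability `p1` (a real implementation would draw candidates from an MPS algorithm conditioned on a training image), and a single linear-average datum in place of geophysical data. The key idea survives: at each pixel, candidate values are weighted by a Gaussian approximation of the intractable likelihood in which not-yet-sampled pixels enter through their prior mean and variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D "image" of n binary pixels; the datum is one linear average.
n = 16
p1 = 0.5                          # prior marginal (stand-in for TI statistics)
x_true = (rng.uniform(size=n) < p1).astype(float)
A = np.full(n, 1.0 / n)           # linear averaging kernel
sigma_d = 0.02
d_obs = A @ x_true + rng.normal(0.0, sigma_d)

def simulate_once():
    x = np.full(n, np.nan)        # start from an empty grid
    for i in rng.permutation(n):
        # Candidate values drawn from the (stand-in) prior.
        cands = (rng.uniform(size=8) < p1).astype(float)
        logw = []
        for c in cands:
            x[i] = c
            known = ~np.isnan(x)
            # Gaussian approximation of the intractable likelihood:
            # unsampled pixels contribute their prior mean and variance.
            mu = A[known] @ x[known] + A[~known].sum() * p1
            var = sigma_d**2 + np.sum(A[~known] ** 2) * p1 * (1 - p1)
            logw.append(-0.5 * (d_obs - mu) ** 2 / var)
        logw = np.array(logw)
        w = np.exp(logw - logw.max())       # stable normalization
        x[i] = cands[rng.choice(len(cands), p=w / w.sum())]
    return x

# Posterior statistics from repeated independent runs.
sims = np.array([simulate_once() for _ in range(200)])
print(abs(A @ sims.mean(axis=0) - d_obs))
```

Running the simulation many times (here sequentially; in practice possibly in parallel) yields an ensemble whose linear averages cluster around the observed datum, mimicking how the abstract's posterior statistics are assembled from repeated runs.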
We propose an approach for solving geophysical inverse problems that significantly reduces computational costs compared to Markov chain Monte Carlo (MCMC) methods while providing enhanced uncertainty quantification compared to efficient gradient-based deterministic methods. The proposed approach relies on variational inference (VI), which seeks to approximate the unnormalized posterior distribution parametrically, within a given family of distributions, by solving an optimization problem. Although prone to bias if the family of distributions is too limited, VI provides a computationally efficient approach that scales well to high-dimensional problems. To enhance the expressiveness of the parameterized posterior in the context of geophysical inverse problems, we combine VI with inverse autoregressive flows (IAF), a type of normalizing flow that has proven efficient for machine learning tasks. The IAF consists of invertible neural transport maps transforming an initial density of random variables into a target density, in which the mapping of each instance is conditioned on previous ones. In the combined VI-IAF routine, the approximate distribution is parameterized by the IAF; the potential expressiveness of the approximate posterior is therefore determined by the architecture of the network. The parameters of the IAF are learned by minimizing the Kullback-Leibler divergence between the approximate posterior, obtained by pushing samples drawn from a standard normal distribution through the IAF, and the target posterior distribution. We test this approach on problems in which complex geostatistical priors are described by latent variables within a deep generative model (DGM) of the adversarial type. Previous results have concluded that gradient-based optimization techniques perform poorly in this setting because of the high nonlinearity of the generator. Preliminary results involving linear physics suggest that the VI-IAF routine can recover the true model and provides high-quality uncertainty quantification at a low computational cost. As a next step, we will consider cases in which the forward model is nonlinear and include comparisons against standard MCMC sampling. Because most of the inverse-problem nonlinearity arises from the DGM generator, we do not expect significant differences in the quality of the approximations with respect to the linear-physics case.
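The training objective can be made concrete with a deliberately minimal sketch. In place of a full IAF (stacked autoregressive neural maps), a single affine flow `x = mu + exp(s) * z` with `z ~ N(0, 1)` is used, and the unnormalized target posterior is a hypothetical 1-D Gaussian; both are assumptions for illustration only. The training loop is the same in spirit as VI-IAF: draw standard-normal samples, push them through the flow, and follow reparameterized gradients of the Kullback-Leibler divergence KL(q || p):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical unnormalized 1-D Gaussian "posterior" as the target.
m_star, v_star = 2.0, 0.25
def log_p(x):
    return -0.5 * (x - m_star) ** 2 / v_star

# Minimal flow standing in for an IAF: x = mu + exp(s) * z, z ~ N(0, 1).
# For this map, log q(x) = log N(z; 0, 1) - s, so the KL objective
# E_q[log q(x) - log p(x)] has simple closed-form reparameterized gradients.
mu, s = 0.0, 0.0
lr = 0.05
for step in range(2000):
    z = rng.normal(size=64)           # base samples
    x = mu + np.exp(s) * z            # push forward through the flow
    # Monte Carlo estimates of dKL/dmu and dKL/ds (reparameterization trick).
    g_mu = np.mean((x - m_star) / v_star)
    g_s = np.mean((x - m_star) * np.exp(s) * z / v_star) - 1.0
    mu -= lr * g_mu
    s -= lr * g_s

print(mu, np.exp(s))
```

For this Gaussian target the flow parameters converge toward the true posterior mean and standard deviation (`m_star` and `sqrt(v_star)`); in the full method the single affine map is replaced by a stack of conditional autoregressive maps, but the KL-minimization loop is unchanged.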