We compare two Monte Carlo inversions that address some of the main problems of dispersion curve inversion: deriving reliable uncertainty appraisals, determining the optimal model parameterization, and avoiding entrapment in local minima of the misfit function. The first method is a transdimensional Markov chain Monte Carlo that treats the number of model parameters as an unknown, that is, the locations of the layer boundaries together with the Vs and the Vp/Vs ratio of each layer. A reversible-jump Markov chain Monte Carlo algorithm is used to sample the variable-dimension model space, while the adoption of a parallel tempering strategy and of a delayed rejection updating scheme improves the efficiency of the probabilistic sampling. The second approach is a Hamiltonian Monte Carlo inversion that considers the Vs, the Vp/Vs ratio, and the thickness of each layer as unknowns, whereas the best model parameterization (number of layers) is determined by applying standard statistical tools to the outcomes of different inversions run with different model dimensionalities. This work has a mainly didactic perspective and, for this reason, we focus on synthetic examples in which only the fundamental mode is inverted. We perform what we call semi-analytical and seismic inversion tests on 1D subsurface models. In the first case, the dispersion curves are computed directly from the considered model using the Haskell–Thomson method, while in the second case they are extracted from synthetic shot gathers. To validate the inversion outcomes, we analyse the estimated posterior models and also perform a sensitivity analysis in which we compute the model resolution matrices, posterior covariance matrices, and correlation matrices from the ensembles of sampled models. Our tests demonstrate that the major benefit of the transdimensional inversion is its capability to provide a parsimonious solution that automatically adjusts the model dimensionality.
The downside of this approach is that many models must be sampled to guarantee accurate posterior uncertainties. In contrast, the Hamiltonian Monte Carlo algorithm requires fewer sampled models, but its limitations are the computational effort related to the Jacobian computation and the multiple inversion runs needed to determine the optimal model parameterization.
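To make the parallel tempering strategy mentioned above concrete, the sketch below shows the standard swap acceptance rule between two tempered chains. This is a generic textbook formulation, not the paper's implementation; the function name and argument layout are our own. Each chain samples the likelihood raised to 1/T, and a state exchange between chains at temperatures T_i and T_j is accepted with probability min(1, exp((1/T_i - 1/T_j)(logL_j - logL_i))), which lets hot chains ferry states out of local minima toward the cold, target chain.

```python
import math

def pt_swap_accept(loglike_i, loglike_j, temp_i, temp_j):
    """Metropolis acceptance probability for exchanging the states of two
    tempered chains. Chain k samples the likelihood tempered by 1/temp_k,
    so the swap probability is
        min(1, exp((1/T_i - 1/T_j) * (logL_j - logL_i))).
    Illustrative sketch only; names are assumptions, not the paper's code."""
    log_alpha = (1.0 / temp_i - 1.0 / temp_j) * (loglike_j - loglike_i)
    # Clamp the exponent at 0 so the returned probability never exceeds 1.
    return math.exp(min(0.0, log_alpha))
```

For example, if the hotter chain (T = 2) currently holds the better model (higher log-likelihood), the swap is accepted with probability 1, moving that model onto the cold chain.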
We have compared the performances of six recently developed global optimization algorithms: the imperialist competitive algorithm, firefly algorithm (FA), water cycle algorithm (WCA), whale optimization algorithm (WOA), fireworks algorithm (FWA), and quantum particle swarm optimization (QPSO). These methods have been introduced in the past few years and have found very limited or no application to geophysical exploration problems thus far. We benchmark the algorithms' results against particle swarm optimization (PSO), a popular and well-established global search method. In particular, we are interested in assessing the exploration and exploitation capabilities of each method as the dimension of the model space increases. First, we test the different algorithms on two multiminima and two convex analytic objective functions. Then, we compare them on residual statics corrections and 1D elastic full-waveform inversion, which are highly nonlinear geophysical optimization problems. Our results demonstrate that FA, FWA, and WOA are characterized by optimal exploration capabilities because they outperform the other approaches on optimization problems with multiminima objective functions. In contrast, QPSO and PSO have good exploitation capabilities because they easily solve ill-conditioned optimizations characterized by a nearly flat valley in the objective function. QPSO, PSO, and WCA offer a good compromise between exploitation and exploration.
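Since PSO serves as the benchmark above, a minimal sketch of its standard update may be useful. This is the textbook formulation with commonly used (but here assumed) inertia and acceleration coefficients, not the benchmarked implementation: each particle's velocity blends inertia, attraction toward its personal best, and attraction toward the swarm's global best.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
                 bounds=(-5.0, 5.0), seed=0):
    """Minimal standard PSO (illustrative sketch; parameter values are
    common defaults, not those of the benchmarked study)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_f = [f(p) for p in pos]               # personal best values
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

On a convex objective such as the sphere function, this update exhibits the exploitation behaviour the comparison highlights, collapsing the swarm rapidly onto the minimum.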
Genetic algorithms (GAs) usually suffer from the so-called genetic-drift effect, which reduces the genetic variability within the evolving population and makes the algorithm converge toward a local minimum of the objective function. We have developed an innovative method to attenuate this genetic-drift effect, which we named the drift-avoidance GA (DAGA). The implemented method combines some principles of niched GAs (NGAs), catastrophic GAs, crowding GAs, and the Monte Carlo algorithm (MCA) with the aim of maintaining optimal genetic diversity within the evolving population, thus avoiding premature convergence. The DAGA performance is first tested on different analytic objective functions often used to test optimization algorithms. In this case, the implemented DAGA approach is compared with standard GAs, catastrophic GAs, crowding GAs, NGAs, and MCA. Then, the DAGA and NGA approaches are compared on two well-known nonlinear geophysical optimization problems characterized by objective functions with complex topologies: residual statics corrections and 2D acoustic full-waveform inversion. To draw general conclusions, we limit our attention to synthetic seismic optimizations. Our tests prove that the DAGA approach ensures convergence for objective functions with very complex topologies, where other GA implementations (such as standard GAs or NGAs) fail to converge. For simpler topologies, DAGA achieves performance similar to that of the other GA implementations considered. The DAGA approach may have a slightly higher or lower computational cost than standard GA or NGA methods, depending on its convergence speed, that is, on its ability to reduce the number of forward modelings with respect to the other methods.
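To illustrate the drift problem the abstract describes, the sketch below measures population diversity and restores it with a random-immigrants step when it collapses. This is a generic, well-known mitigation for drift, shown only to make the concept concrete; it is not the DAGA recipe, and all names and thresholds here are assumptions.

```python
import random

def diversity(pop):
    """Mean per-gene standard deviation across the population: a simple
    proxy for the genetic variability that drift erodes."""
    n, dim = len(pop), len(pop[0])
    total = 0.0
    for d in range(dim):
        mean = sum(ind[d] for ind in pop) / n
        var = sum((ind[d] - mean) ** 2 for ind in pop) / n
        total += var ** 0.5
    return total / dim

def inject_immigrants(pop, frac, bounds, rng):
    """Replace a random fraction of the population with fresh random
    individuals (random-immigrants strategy; illustrative only, not the
    DAGA mechanism itself)."""
    lo, hi = bounds
    dim = len(pop[0])
    n_new = max(1, int(frac * len(pop)))
    for i in rng.sample(range(len(pop)), n_new):
        pop[i] = [rng.uniform(lo, hi) for _ in range(dim)]
    return pop
```

A fully converged population has zero diversity; injecting even a small fraction of immigrants restores nonzero variability, which is the property any drift-avoidance scheme must preserve.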
In this work, we describe an experiment concerning global–local full-waveform inversion, carried out on a P-wave seismic reflection profile acquired at Luni, an archaeological site in Italy. The global full-waveform inversion makes use of a two-grid genetic algorithm scheme and the recorded refraction and diving waves to build an initial velocity model of the subsurface. Two important pieces of a priori information that help to better constrain the inversion results are the refraction velocity model and the Dix-converted semblance velocity field obtained from time processing. A good match between observed and predicted data allows us to use the estimated velocity field as the starting point for a local, gradient-based full-waveform inversion that inverts the recorded data (except the surface waves). The final estimated velocity field shows two main discontinuities: one is very shallow and related to the refractor velocity model used, and the other corresponds to the strongest reflection event observed in the pre-stack depth-migrated section, at a depth of 100 m. The pre-stack depth-migrated common image gathers provide evidence of a good horizontal alignment of this reflection, indicating an accurate velocity estimation down to 100 m depth, which corresponds to the maximum offset used in the acquisition.
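The Dix-converted semblance velocities mentioned above come from the classical Dix (1955) relation, which turns RMS (stacking) velocities picked at zero-offset two-way times into interval velocities. A minimal sketch, with function and variable names of our own choosing:

```python
import math

def dix_interval_velocities(t0, v_rms):
    """Dix conversion: interval velocity of the layer between two-way
    times t0[i-1] and t0[i], from the RMS velocities picked there:
        v_int_i = sqrt((v_i^2 t_i - v_{i-1}^2 t_{i-1}) / (t_i - t_{i-1}))
    Illustrative sketch; assumes t0 is strictly increasing."""
    v_int = []
    for i in range(1, len(t0)):
        num = v_rms[i] ** 2 * t0[i] - v_rms[i - 1] ** 2 * t0[i - 1]
        v_int.append(math.sqrt(num / (t0[i] - t0[i - 1])))
    return v_int
```

As a sanity check, constant RMS velocities yield the same constant interval velocity, while an increasing RMS trend implies faster interval velocities at depth.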