Full-waveform inversion is challenging in complex geologic areas. Even when provided with an accurate starting model, inversion algorithms often struggle to update the velocity model. Compared with other areas of applied geophysics, the inclusion of prior information in full-waveform inversion is still in its relative infancy. In part, this is because it is difficult to incorporate prior information for geologic settings dominated by strong discontinuities in the velocity model, since such settings call for nonsmooth regularization. We tackle this problem by imposing constraints on the spatial variations and value ranges of the inverted velocities, as opposed to adding penalties to the objective, which is more customary in mainstream geophysical inversion. By demonstrating the lack of predictability of edge-preserving inversion when the regularization takes the form of an added penalty term, we advocate the inclusion of constraints instead. Our examples show that constraints lead to more predictable results and to significant improvements in the delineation of salt bodies when the constraints are relaxed gradually while the search space is extended so that the inversion approximately fits the observed data but not the noise.
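To illustrate the difference between penalties and constraints, the following is a minimal sketch of a projected-gradient update in which the value-range constraint is enforced exactly at every iteration rather than added as a penalty term. It assumes NumPy; the callable `misfit_gradient`, the step size, and the velocity bounds are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def project_box(m, v_min, v_max):
    """Project a velocity model onto the box constraint v_min <= m <= v_max."""
    return np.clip(m, v_min, v_max)

def projected_gradient_fwi(m0, misfit_gradient, v_min, v_max, step=10.0, n_iter=20):
    """Projected-gradient sketch: take a data-misfit gradient step, then
    project the model back onto the admissible set, so the value-range
    constraint holds exactly at every iteration (no penalty term is added
    to the objective). `misfit_gradient` is a hypothetical callable that
    returns dJ/dm for the current model; any FWI engine could supply it."""
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_iter):
        g = misfit_gradient(m)                       # data-misfit gradient only
        m = project_box(m - step * g, v_min, v_max)  # enforce the constraint
    return m

# Gradual relaxation of the constraint, as advocated above, amounts to
# re-running the loop with progressively wider bounds [v_min, v_max].
```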
Detecting a specific horizon in seismic images is a valuable tool for geological interpretation. Because hand-picking the locations of a horizon is time-consuming, automated computational methods have been developed over the past three decades. Older picking techniques rely on the interpolation of control points; in recent years, neural networks have been used for this task. Until now, most networks have been trained on small patches extracted from larger images, which limits a network's ability to learn from large-scale geologic structures. Moreover, currently available networks and training strategies require label patches with full and continuous annotations, which are also time-consuming to generate. We propose a projected loss function for training convolutional networks with a multi-resolution structure, including variants of the U-net. Our networks learn from a small number of large seismic images without creating patches. The projected loss function enables training on labels with only a few annotated pixels and simply ignores the remaining unknown label pixels. Training uses all of the data without reserving any for validation; only the labels are split into training and testing sets. Contrary to other work on horizon tracking, we train the network to perform nonlinear regression rather than classification. To this end, we construct labels by convolving a Gaussian kernel with the known horizon locations, which encodes the uncertainty in the labels, and the network output represents the probability of the horizon location. We demonstrate the proposed computational ingredients on two different datasets, for horizon extrapolation and interpolation, and show that the predictions of our methodology are accurate even in areas far from known horizon locations because our learning strategy exploits all of the data in large seismic images.
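To make the label-related ingredients concrete, the following is a minimal sketch, assuming PyTorch, of building a regression label by convolving a Gaussian kernel with known horizon picks, and of a masked ("projected") loss that evaluates the misfit only at annotated pixels. The data structure `horizon_rows` (annotated column to picked row), the column-wise mask, and the kernel width are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def gaussian_label(horizon_rows, height, width, sigma=2.0):
    """Regression label for one seismic image: ones at known horizon picks,
    blurred with a vertical Gaussian kernel to encode label uncertainty.
    `horizon_rows` maps an annotated column (trace) to its picked row."""
    label = torch.zeros(height, width)
    mask = torch.zeros(height, width)             # 1 where the label is known
    for col, row in horizon_rows.items():
        label[row, col] = 1.0
        mask[:, col] = 1.0                        # whole annotated column counts
    # separable Gaussian blur along the vertical (time/depth) axis
    k = torch.arange(-3 * int(sigma), 3 * int(sigma) + 1, dtype=torch.float32)
    kernel = torch.exp(-0.5 * (k / sigma) ** 2)
    kernel = kernel / kernel.sum()
    label = F.conv2d(label[None, None], kernel.view(1, 1, -1, 1),
                     padding=(kernel.numel() // 2, 0))[0, 0]
    return label, mask

def projected_loss(pred, label, mask):
    """Mean-squared error evaluated only where labels are annotated; the
    unknown label pixels contribute nothing to the loss or its gradient."""
    return ((mask * (pred - label)) ** 2).sum() / mask.sum().clamp(min=1.0)
```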
Nonlinear inverse problems are often hampered by local minima because of missing low frequencies and far offsets in the data, lack of access to good starting models, noise, and modeling errors. A well-known approach to counter these deficiencies is to include prior information on the unknown model, which regularizes the inverse problem. Although conventional regularization methods have resulted in enormous progress on ill-posed (geophysical) inverse problems, challenges remain when the prior information consists of multiple pieces. To handle this situation, we have developed an optimization framework that allows us to add multiple pieces of prior information in the form of constraints. The proposed framework is more suitable for full-waveform inversion (FWI) because it offers assurances that multiple constraints are imposed uniquely at each iteration, irrespective of the order in which they are invoked. To project onto the intersection of multiple sets uniquely, we use Dykstra's algorithm, which does not rely on trade-off parameters. In that sense, our approach differs substantially from approaches such as Tikhonov/penalty regularization and gradient filtering, none of which offer such assurances; this makes them less suitable for FWI, where unrealistic intermediate results effectively derail the inversion. By working with intersections of sets, we avoid trade-off parameters and keep objective calculations separate from projections, which are often much faster to compute than objectives/gradients in 3D. These features allow for easy integration into existing code bases. Working with constraints also allows for heuristics in which we build up the complexity of the model by gradually relaxing the constraints. This strategy helps to avoid convergence to local minima that represent unrealistic models. Using multiple constraints, we obtain better FWI results than with a quadratic penalty method, while all constraints are defined in physical units and follow directly from the prior knowledge.
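The following is a minimal sketch of Dykstra's algorithm, assuming NumPy: given one projection operator per set, it returns the unique closest point in the intersection of the sets without any trade-off parameters. The projector list and the iteration count are illustrative choices, not prescriptions from the study.

```python
import numpy as np

def dykstra(m, projectors, n_iter=50):
    """Dykstra's algorithm: project `m` onto the intersection of convex sets,
    given one projection operator per set. Unlike simple alternating
    projections, it converges to the unique closest point in the intersection,
    irrespective of the order of the sets."""
    x = np.asarray(m, dtype=float).copy()
    p = [np.zeros_like(x) for _ in projectors]    # one correction per set
    for _ in range(n_iter):
        for i, proj in enumerate(projectors):
            y = proj(x + p[i])                    # project the corrected iterate
            p[i] = x + p[i] - y                   # update this set's correction
            x = y
    return x

# Example projectors: a bound (box) constraint plus any other single-set
# projection supplied by the user (names here are hypothetical), e.g.
#   projectors = [lambda v: np.clip(v, 1500.0, 4500.0), my_other_projection]
```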
Major mineral discoveries have declined in recent decades, and the natural resource industry is adapting by incorporating novel technologies such as machine learning and artificial intelligence to help guide the next generation of exploration. One such development is a deep-learning architecture called VNet, based on convolutional neural networks. This method is designed specifically for use with geoscience data and is suitable for a multitude of exploration applications. One such application is mineral prospectivity, in which the machine is tasked with identifying the complex pattern between many layers of geoscience data and a particular commodity of interest, such as gold. The VNet algorithm is designed to recognize patterns at different spatial scales, which suits the mineral prospectivity problem well because local and regional trends often jointly control where mineralization occurs. We test this approach in an orogenic gold greenstone belt setting in the Canadian Arctic, where the algorithm uses gold values from sparse drill holes for training to predict gold mineralization elsewhere in the region. The prospectivity results highlight new target areas, and one such target was followed up with a direct-current induced polarization survey. A chargeability anomaly was discovered where VNet had predicted gold mineralization, and subsequent drilling encountered a 6 g/t Au intercept within a 10 m drilled interval that averaged more than 1.0 g/t Au. Although most of the prospectivity targets generated from VNet were not drill tested, this first intercept helps validate the approach. We believe this method can help maximize the use of existing geoscience data for successful and efficient exploration programs in the future.
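For readers who want a concrete picture of a convolutional model that combines local and regional spatial scales over a stack of co-registered geoscience layers, the following is a minimal two-resolution encoder-decoder sketch, assuming PyTorch. It is not the VNet used in the study: the class name, layer widths, and depth are illustrative assumptions, and in practice the training loss would be restricted to grid cells intersected by drill holes.

```python
import torch
import torch.nn as nn

class TwoScaleNet(nn.Module):
    """Illustrative two-resolution encoder-decoder (in the spirit of V-Net/U-Net):
    a fine branch captures local patterns and a pooled, coarse branch captures
    regional trends. Input: co-registered geoscience layers stacked as channels
    (grid dimensions assumed even); output: one prospectivity value per cell."""
    def __init__(self, n_layers_in, width=16):
        super().__init__()
        self.fine = nn.Sequential(
            nn.Conv2d(n_layers_in, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.coarse = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.head = nn.Conv2d(2 * width, 1, 1)    # fuse fine + coarse features

    def forward(self, x):
        f = self.fine(x)                           # local (fine-scale) features
        c = self.up(self.coarse(self.down(f)))     # regional (coarse-scale) features
        return self.head(torch.cat([f, c], dim=1))
```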