To overcome the cycle-skipping issue in full waveform inversion (FWI), we developed a deep neural network (DNN) approach to predict the absent low-frequency components by exploiting the hidden physical relation connecting the low- and high-frequency data. To efficiently solve this challenging nonlinear regression problem, we proposed two novel strategies for designing the DNN architecture and optimizing the learning process: (1) a dual data feed structure and (2) progressive transfer learning. With the dual data feed structure, not only the high-frequency data but also the corresponding beat-tone data are fed into the DNN, relieving the burden of feature extraction. The second strategy, progressive transfer learning, enables us to train the DNN with a single evolving training dataset: the dataset is updated iteratively as the physics-based inversion module gradually retrieves subsurface information, progressively enhancing the prediction accuracy of the DNN and propelling the inversion out of local minima. Synthetic numerical experiments suggest that, without any a priori geological information, the low-frequency data predicted via progressive transfer learning are sufficiently accurate for an FWI engine to produce reliable subsurface velocity models free of cycle-skipping artifacts.
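The role of the beat-tone input in the dual data feed can be illustrated with a toy signal-processing sketch (not the authors' implementation; the sampling rate and frequencies below are illustrative assumptions): multiplying two high-frequency components yields a beat whose difference-frequency term sits in the low band missing from the recorded data.

```python
import numpy as np

# Illustrative sketch: the product of two high-frequency cosines contains
# a low (difference) frequency and a high (sum) frequency component.
fs = 1000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s of samples
f1, f2 = 30.0, 25.0              # two "high-frequency" components, Hz (assumed)

beat = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)
# Product-to-sum identity:
#   beat = 0.5*cos(2*pi*(f1-f2)*t) + 0.5*cos(2*pi*(f1+f2)*t)
# so the spectrum peaks at 5 Hz (difference) and 55 Hz (sum).

spectrum = np.abs(np.fft.rfft(beat))
freqs = np.fft.rfftfreq(len(beat), 1.0 / fs)
peak_freqs = sorted(freqs[np.argsort(spectrum)[-2:]])
print(peak_freqs)  # peaks at 5.0 Hz and 55.0 Hz
```

The 5 Hz difference tone is the kind of low-band information the beat-tone feed exposes to the network, sparing it from having to extract that relation from the raw high-frequency traces alone.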
A deep-learning-based workflow is proposed in this paper to solve the first-arrival picking problem for near-surface velocity model building. Traditional methods such as STA/LTA perform poorly when the signal-to-noise ratio (SNR) is low or the near-surface geological structures are complex. We formulate this challenging task as a segmentation problem and pair it with a novel post-processing approach that identifies picks along the segmentation boundary. The workflow has three parts: a deep U-net for segmentation, a recurrent neural network (RNN) for picking, and a weight-adaptation approach for generalizing to new datasets. In particular, we highlight the importance of selecting a proper loss function for training the network. Instead of taking an end-to-end approach to the picking problem, we emphasize the performance gain obtained by using the RNN to optimize the picks. Finally, we adopt a simple transfer learning scheme and demonstrate, via weight adaptation, its robustness in maintaining picking performance on new datasets. Tests on synthetic datasets show the advantage of the proposed workflow over existing deep-learning methods that focus only on segmentation performance. Tests on field datasets illustrate that a good post-processing picking step is essential for correcting segmentation errors and that the overall workflow efficiently minimizes human intervention in the first-arrival picking task.
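For reference, the classical STA/LTA baseline that this workflow improves on can be sketched in a few lines (a minimal version; the window lengths, threshold, and synthetic trace below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def sta_lta_pick(trace, sta_len=10, lta_len=100, threshold=5.0):
    """Minimal STA/LTA first-break picker: return the first sample index
    where the short-term/long-term average energy ratio exceeds the
    threshold, or -1 if no trigger occurs."""
    energy = trace ** 2
    for i in range(lta_len, len(trace) - sta_len):
        lta = energy[i - lta_len:i].mean()   # long-term (background) energy
        sta = energy[i:i + sta_len].mean()   # short-term (onset) energy
        if lta > 0 and sta / lta > threshold:
            return i
    return -1

# Synthetic trace: weak noise followed by an impulsive arrival at sample 500.
rng = np.random.default_rng(0)
trace = 0.05 * rng.standard_normal(1000)
trace[500:520] += 1.0
print(sta_lta_pick(trace))  # triggers within a few samples of 500
```

On a clean impulsive arrival like this, the ratio picker works well; the abstract's point is that on low-SNR field data with complex near-surface structure, such fixed-window energy ratios break down, which motivates the learned segmentation-plus-RNN approach.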