We present a rank-reduction algorithm that permits simultaneous reconstruction and random-noise attenuation of seismic records. Our technique is based on multichannel singular spectrum analysis (MSSA). The technique entails organizing spatial data at a given temporal frequency into a block Hankel matrix that, under ideal conditions, is a matrix of rank k, where k is the number of plane waves in the window of analysis. Additive noise and missing samples increase the rank of the block Hankel matrix of the data. Consequently, rank reduction is proposed as a means to attenuate noise and recover missing traces. We present an iterative algorithm that resembles seismic data reconstruction with the method of projection onto convex sets. In addition, we propose to adopt a randomized singular value decomposition to accelerate the rank-reduction stage of the algorithm. We apply MSSA reconstruction to synthetic examples and a field data set. Synthetic examples were used to assess the performance of the method in two reconstruction scenarios: a noise-free case and data contaminated with noise. In both cases, we found extremely low reconstruction errors that are indicative of an optimal recovery. The field data example consists of a 2D prestack volume that depends on common midpoint and offset. We use the MSSA reconstruction method to complete missing offsets and, at the same time, increase the signal-to-noise ratio of the seismic volume.
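As a rough illustration (not the authors' implementation), the core rank-reduction step at a single temporal frequency can be sketched in numpy: embed the frequency slice into a Hankel matrix, truncate its SVD to rank k, and average anti-diagonals to return to the data domain. The function names are our own, and a plain SVD stands in for the randomized decomposition mentioned above.

```python
import numpy as np

def hankel_embed(x, L):
    """Embed a 1D complex frequency slice of length N into an L x (N-L+1) Hankel matrix."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

def rank_reduce(H, k):
    """Keep only the k largest singular components (k = assumed number of plane waves)."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vh[:k, :]

def antidiagonal_average(H):
    """Recover a 1D frequency slice by averaging the anti-diagonals of the Hankel matrix."""
    L, M = H.shape
    N = L + M - 1
    x = np.zeros(N, dtype=H.dtype)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(M):
            x[i + j] += H[i, j]
            counts[i + j] += 1
    return x / counts
```

In a reconstruction loop, the recovered values at missing traces would be reinserted into the data and the step repeated, in the spirit of projection onto convex sets.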
We present an iterative nonparametric approach to spectral estimation that is particularly suitable for the estimation of line spectra. This approach minimizes a cost function derived from Bayes' theorem. The method is suited to line spectra because a "long-tailed" distribution is used to model the prior distribution of spectral amplitudes. An important aspect of this method is that, since the data themselves are used as constraints, phase information can also be recovered and used to extend the data outside the original window. The objective function is formulated in terms of hyperparameters that control the degree of fit and the spectral resolution. Noise rejection can also be achieved by truncating the number of iterations. Spectral resolution and extrapolation length are controlled by a single parameter. When this parameter is large compared with the spectral powers, the algorithm leads to zero extrapolation of the data, and the estimated Fourier transform yields the periodogram. When the data are sampled at a constant rate, the algorithm uses one Levinson recursion per iteration; for irregular sampling (unevenly sampled and/or gapped data), it uses one Cholesky decomposition per iteration. The performance of the algorithm is illustrated with three problems that frequently arise in geophysical data processing: 1) harmonic retrieval from a time series contaminated with noise; 2) linear event detection from a finite-aperture array of receivers [which, in fact, is an extension of 1)]; and 3) interpolation/extrapolation of gapped data. The performance of the algorithm as a spectral estimator is tested on the Kay and Marple data set. The achieved resolution is comparable with that of parametric methods, but with a more accurate representation of the relative power in the spectral lines.
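One way such an iteration can be realized, sketched here under our own simplifying assumptions (a Cauchy-type long-tailed prior and irregularly sampled data, so that each iteration performs one Cholesky decomposition, as stated above), is iteratively reweighted least squares; all names and parameter values are illustrative:

```python
import numpy as np

def cauchy_spectrum(t, d, freqs, lam=1.0, sigma=1.0, n_iter=10):
    """IRLS estimate of line-spectrum amplitudes under a Cauchy (long-tailed) prior.
    For irregular sampling, one Cholesky factorization is performed per iteration."""
    F = np.exp(2j * np.pi * np.outer(t, freqs))  # forward operator: spectrum -> data
    x = F.conj().T @ d / len(t)                  # initial, periodogram-like estimate
    for _ in range(n_iter):
        w = 1.0 / (1.0 + np.abs(x) ** 2 / sigma ** 2)  # Cauchy reweighting of the prior
        A = F.conj().T @ F + lam * np.diag(w)          # Hermitian positive definite
        L = np.linalg.cholesky(A)
        y = np.linalg.solve(L, F.conj().T @ d)
        x = np.linalg.solve(L.conj().T, y)
    return x
```

Because the data themselves constrain the solution, the recovered complex amplitudes carry phase, which is what allows extrapolation beyond the original window.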
The Radon transform (RT) suffers from the typical problems of loss of resolution and aliasing that arise as a consequence of incomplete information, including limited aperture and discretization. Sparseness in the Radon domain is a valid and useful criterion for supplying this missing information, equivalent in a sense to assuming smooth amplitude variation in the transition between known and unknown (missing) data. Applying this constraint while honoring the data can become a serious challenge for routine seismic processing because of the very limited processing time available, in general, per common midpoint. To develop methods that are robust, easy to use, and flexible enough to adapt to different problems, we have to pay attention to a variety of algorithms, operator design, and estimation of the hyperparameters that are responsible for the regularization of the solution. In this paper, we discuss fast implementations for several varieties of RT in the time and frequency domains. An iterative conjugate gradient algorithm with fast Fourier transform multiplication is used in all cases. To preserve the important property of iterative subspace methods of regularizing the solution by the number of iterations, the model weights are incorporated into the operators. This turns out to be of particular importance, and it can be understood in terms of the singular vectors of the weighted transform. The iterative algorithm is stopped according to a generalized cross-validation criterion for subspaces. We apply this idea to several known implementations and compare results in order to better understand the differences between, and merits of, these algorithms.
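The idea of folding model weights into the operator so that early stopping still regularizes can be illustrated with a generic conjugate-gradient least-squares (CGLS) loop. For brevity, the sketch below uses plain callables in place of FFT-based multiplications; the structure and names are our own, not the paper's code.

```python
import numpy as np

def cgls(op, op_adj, d, w, n_iter=50, tol=1e-12):
    """CGLS applied to the weighted operator z -> op(w * z). The model weights w
    are incorporated into the operator, so truncating the iterations regularizes
    the solution m = w * z in the weighted subspace."""
    forward = lambda z: op(w * z)
    adjoint = lambda r: w * op_adj(r)
    z = np.zeros_like(adjoint(d))
    r = d.copy()
    s = adjoint(r)
    p = s.copy()
    gamma = np.vdot(s, s).real
    gamma0 = gamma
    for _ in range(n_iter):
        q = forward(p)
        alpha = gamma / np.vdot(q, q).real
        z = z + alpha * p
        r = r - alpha * q
        s = adjoint(r)
        gamma_new = np.vdot(s, s).real
        if gamma_new < tol * gamma0:  # residual gradient has converged
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return w * z
```

In the RT setting, `op`/`op_adj` would be the forward/adjoint Radon operators applied via FFTs, and the stopping index would be chosen by the cross-validation criterion rather than a fixed count.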
We present a high-resolution procedure to reconstruct common-midpoint (CMP) gathers. First, we describe the forward and inverse transformations between offset and velocity space. Then, we formulate an underdetermined linear inverse problem in which the target is the artifact-free, aperture-compensated velocity gather. We show that a sparse inversion leads to a solution that resembles the infinite-aperture velocity gather, that is, the velocity gather that would have been estimated with a simple conjugate operator designed from an infinite-aperture seismic array. This high-resolution velocity gather is then used to reconstruct the offset space. The algorithm is formally derived using two basic principles. First, we use the principle of maximum entropy to translate prior information about the unknown parameters into a probabilistic framework; in other words, to assign a probability density function to our model. Second, we apply Bayes's rule to relate the a priori probability density function (pdf) to the pdf corresponding to the experimental uncertainties (the likelihood function) and thereby construct the a posteriori distribution of the unknown parameters. Finally, the model is evaluated by maximizing the a posteriori distribution. When the problem is correctly regularized, the algorithm converges to a solution characterized by different degrees of sparseness depending on the required resolution. The solutions exhibit minimum entropy when the entropy is measured in terms of Burg's definition. We emphasize two crucial differences between our approach and the familiar Burg method of maximum entropy spectral analysis. First, Burg's entropy is minimized rather than maximized, which is equivalent to inferring as much as possible about the model from the data. Second, our approach uses the data themselves as constraints, in contrast with the classic maximum entropy spectral analysis approach, where the autocorrelation function is the constraint. This implies that we recover not only amplitude information but also phase information, which serves to extrapolate the data outside the original aperture of the array. This tradeoff is controlled by a single parameter that, under asymptotic conditions, reduces the method to a damped least-squares solution. Finally, the high-resolution, aperture-compensated velocity gather is used to extrapolate near- and far-offset traces.
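A toy version of the forward transformation between a velocity-type (here parabolic q) space and offset space, together with the asymptotic damped least-squares inversion per frequency, might look as follows. The parabolic moveout, operator names, and damping value are our own illustrative assumptions; the sparse solution discussed above would replace the constant damping `mu` with an iteratively reweighted diagonal derived from the long-tailed prior.

```python
import numpy as np

def parabolic_ops(freqs, offsets, qs):
    """One forward operator per angular frequency w for a parabolic Radon transform:
    d(w, h) = sum_q m(w, q) * exp(-1j * w * q * h**2)."""
    return [np.exp(-1j * w * np.outer(offsets ** 2, qs)) for w in freqs]

def damped_ls_gathers(D, ops, mu=0.01):
    """Damped least-squares q-gather: one small Hermitian solve per frequency.
    Replacing mu*I with a model-dependent diagonal yields the sparse solution."""
    M = []
    for d, L in zip(D, ops):
        A = L.conj().T @ L + mu * np.eye(L.shape[1])
        M.append(np.linalg.solve(A, L.conj().T @ d))
    return np.array(M)
```

Once the high-resolution gather `M` is obtained, applying the forward operators on a denser or wider offset axis reconstructs and extrapolates the missing traces.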
In seismic data processing, we often need to interpolate and extrapolate data at missing spatial locations. The reconstruction problem can be posed as an inverse problem where, from inadequate and incomplete data, we attempt to reconstruct the seismic wavefield at locations where measurements were not acquired. We propose a wavefield reconstruction scheme for spatially band‐limited signals. The method entails solving an inverse problem where a wavenumber‐domain regularization term is included. The regularization term constrains the solution to be spatially band‐limited and imposes a prior spectral shape. The numerical algorithm is quite efficient since the method of conjugate gradients in conjunction with fast matrix–vector multiplications, implemented via the fast Fourier transform (FFT), is adopted. The algorithm can be used to perform multidimensional reconstruction in any spatial domain.
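Under simplifying assumptions of our own (a 1D real signal, a 0/1 sampling mask, and a hard in-band/out-of-band weight), the regularized normal equations can be solved with conjugate gradients and FFT-based operator applications roughly as follows; names and parameter values are illustrative, not the paper's.

```python
import numpy as np

def bandlimited_reconstruct(d, mask, band, lam=0.1, n_iter=200):
    """CG solve of (T^T T + lam * F^H W F) y = T^T d, where T samples at `mask`,
    F is the FFT, and W penalizes wavenumbers outside `band` (a boolean array).
    Each operator application costs one FFT/IFFT pair."""
    w = np.where(band, 0.0, 1.0)  # penalize only out-of-band wavenumber energy

    def A(y):
        return mask * y + lam * np.real(np.fft.ifft(w * np.fft.fft(y)))

    b = mask * d
    y = np.zeros_like(b)
    r = b - A(y)
    p = r.copy()
    rs = np.dot(r, r)
    rs0 = rs
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rs / np.dot(p, Ap)
        y += alpha * p
        r -= alpha * Ap
        rs_new = np.dot(r, r)
        if rs_new < 1e-16 * rs0:  # residual negligible; stop
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return y
```

A smooth spectral weight in place of the 0/1 array `w` would impose a prior spectral shape, and the same structure extends to multidimensional FFTs.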
Simulated annealing was used to invert fundamental and higher-mode Rayleigh wave dispersion curves simultaneously for an S-wave velocity profile. The inversion was applied to near-surface seismic data (with a maximum depth of investigation of around 10 m) acquired over a thick lacustrine clay sequence. In the inversion, the geology was described either in terms of discrete layers or by a superposition of Chebyshev polynomials, and the contrasting results were compared. Simulated annealing allows considerable flexibility in model definition and parametrization and seeks a global rather than a local minimum of a misfit function. It has the added advantage that it can be used to estimate uncertainties in inversion parameters, thereby highlighting features in an inverted profile that should be interpreted with caution. Results show that simulated annealing works well for the inversion of multimodal near-surface Rayleigh wave dispersion curves relative to the same inversion employing only the fundamental mode.
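A generic simulated-annealing minimizer of the kind referred to above can be sketched in a few lines; the cooling schedule, step size, and function names are our own illustrative choices, not the study's settings.

```python
import math
import random

def simulated_annealing(misfit, x0, step, t0=1.0, cooling=0.99, n_iter=5000, seed=0):
    """Generic simulated annealing: random perturbations of the model vector are
    accepted with the Metropolis rule, and the temperature is lowered
    geometrically so the search favors a global minimum of the misfit."""
    rng = random.Random(seed)
    x, fx = list(x0), misfit(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.uniform(-step, step) for xi in x]  # perturb every parameter
        fc = misfit(cand)
        # Always accept improvements; accept worse models with probability exp(-df/t).
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```

For a dispersion-curve inversion, `x0` would hold the layer (or Chebyshev) parameters and `misfit` would measure the distance between observed and predicted multimode dispersion curves; repeated runs from different seeds give a simple handle on parameter uncertainty.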
It is well known that a sparse hyperbolic Radon transform (RT) can be used to extend the aperture of aperture-limited data, filter noise, and fill gaps. In the same manner, an elliptical RT can achieve similar results when applied to slant-stack sections. A problem with these transformations is that they have a time-variant kernel, which results in a slow implementation. By defining the model space in terms of an irregularly sampled velocity axis to minimize the number of unknowns during the inversion, and by using sparse matrices, however, the computation time can be kept low enough for practical application. We implement hyperbolic and elliptical time-domain RTs by inversion via weighted conjugate gradient methods with a sparseness constraint. The hyperbolic RT performs accurate interpolation in common-midpoint (CMP) gathers, while the elliptical RT attenuates sampling artifacts in slant-stack sections obtained from CMP gathers with poor sampling and gaps.
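Because the time-domain hyperbolic RT has a time-variant kernel, its forward and adjoint operators spread and gather amplitudes along hyperbolas explicitly. The following sketch (our own illustrative implementation, using nearest-sample interpolation) shows such an operator pair; a weighted conjugate-gradient inversion with a sparseness constraint would then be built on top of it.

```python
import numpy as np

def hyperbolic_forward(m, taus, vels, offsets, dt):
    """Map a velocity gather m[itau, ivel] to a CMP gather d[it, ih] by spreading
    each (tau, v) point along the hyperbola t = sqrt(tau**2 + (h/v)**2)."""
    nt = m.shape[0]
    d = np.zeros((nt, len(offsets)))
    for iv, v in enumerate(vels):
        for ih, h in enumerate(offsets):
            t = np.sqrt(taus ** 2 + (h / v) ** 2)
            it = np.rint(t / dt).astype(int)
            ok = it < nt  # discard samples falling outside the time window
            np.add.at(d[:, ih], it[ok], m[ok, iv])
    return d

def hyperbolic_adjoint(d, taus, vels, offsets, dt):
    """Adjoint operator: gather amplitudes along the same hyperbolas back into (tau, v)."""
    nt = d.shape[0]
    m = np.zeros((nt, len(vels)))
    for iv, v in enumerate(vels):
        for ih, h in enumerate(offsets):
            t = np.sqrt(taus ** 2 + (h / v) ** 2)
            it = np.rint(t / dt).astype(int)
            ok = it < nt
            m[ok, iv] += d[it[ok], ih]
    return m
```

Since both operators touch only the (tau, v, h) index triples that lie on a hyperbola, they correspond to a sparse matrix and its transpose, which is what keeps the run time practical; an irregular velocity axis simply means `vels` need not be uniformly spaced.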
It is unclear whether one can (or should) write a tutorial about Bayes. It is a little like writing a tutorial about the sense of humor. However, this tutorial is about the Bayesian approach to the solution of the ubiquitous inverse problem. Inasmuch as it is a tutorial, it has its own special ingredients. The first is that it is an overview; details are omitted for the sake of the grand picture. In fractal language, it is the progenitor of the complex pattern. As such, it is a vision of the whole. The second is that it does, of necessity, assume some ill-defined knowledge on the part of the reader. Finally, this tutorial presents our view; it may not appeal to, let alone be agreed with by, everyone.