Coherence measures applied to 3-D seismic data volumes have proven to be an effective method for imaging geological discontinuities such as faults and stratigraphic features. By removing the seismic wavelet from the data, seismic coherence offers interpreters a different perspective, often exposing subtle features not readily apparent in the seismic data. Several formulations exist for obtaining coherence estimates. The first three generations of coherence algorithms at Amoco are based, respectively, on cross correlation, semblance, and an eigendecomposition of the data covariance matrix. Application of these three generations to data from the Gulf of Mexico indicates that the implementation of the eigenstructure approach described in this paper produces the most robust results. This paper first introduces the basic eigenstructure approach for computing coherence followed by a comparison on data from the Gulf of Mexico. Next, Appendix A develops a theoretical connection between the well‐known semblance and the less well‐known eigenstructure measures of coherence in terms of the eigenvalues of the data covariance matrix. Appendix B further extends the analysis by comparing the semblance- and eigenstructure‐based coherence measures in the presence of additive uncorrelated noise.
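The eigenstructure coherence measure described above can be sketched in a few lines: form the covariance matrix of the traces in a small analysis window and take the ratio of its largest eigenvalue to its trace (the sum of all eigenvalues). The following is a minimal NumPy illustration, not the authors' implementation; the function name and window handling are our assumptions:

```python
import numpy as np

def eigenstructure_coherence(window):
    """Eigenstructure coherence of one analysis window.

    window : (n_traces, n_samples) array of seismic amplitudes.
    Returns lambda_max / sum(lambda) of the trace covariance matrix,
    a value in (0, 1]; identical (scaled) traces give exactly 1.
    """
    c = window @ window.T        # (n_traces, n_traces) covariance matrix
    ev = np.linalg.eigvalsh(c)   # eigenvalues in ascending order
    total = ev.sum()             # equals trace(c)
    return ev[-1] / total if total > 0 else 0.0
```

Because a single waveform, however scaled from trace to trace, produces a rank-one covariance matrix, the ratio is 1 for perfectly coherent data and falls toward 1/n_traces for incoherent noise.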
We have used crosscorrelation, semblance, and eigenstructure algorithms to estimate coherency. The first two algorithms calculate coherency over a multiplicity of trial time lags or dips, with the dip having the highest coherency corresponding to the local dip of the reflector. Partially because of its greater computational cost, our original eigenstructure algorithm calculated coherency along an implicitly flat horizon. Although generalizing the eigenstructure algorithm to search over a range of test dips allowed us to image coherency in the presence of steeply dipping structures, we were somewhat surprised that this generalization concomitantly degraded the quality of the fault images in areas of flatter dip. Because it estimates reflector dip locally (from as few as five traces), the multidip coherency estimate provides an algorithmically correct, but interpretationally undesirable, estimate of the best apparent dip that explains the offset reflectors across a fault. We ameliorate this problem using two methods, both of which smooth a locally inaccurate estimate of regional dip. We then calculate our eigenstructure estimate of coherency only along the dip of the reflector, thereby providing maximum lateral resolution of reflector discontinuities. We are thus both better able to explain the superior results obtained by our earliest eigenstructure analysis along interpreted horizon slices, and able to extend this resolution to steeply dipping reflectors in uninterpreted cubes of seismic data.
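The semblance-based dip scan can be sketched as a toy version that shifts each trace by an integer number of samples per unit offset and keeps the trial dip with the highest semblance. The function names, the trial-dip parameterization, and the wrap-around shifting via np.roll are our simplifications, not the paper's code (real implementations interpolate fractional shifts):

```python
import numpy as np

def semblance(window):
    """Semblance of aligned traces: stack energy / (n_traces * total energy)."""
    num = (window.sum(axis=0) ** 2).sum()
    den = window.shape[0] * (window ** 2).sum()
    return num / den if den > 0 else 0.0

def best_dip(traces, offsets, trial_dips):
    """Return the trial dip (samples per unit offset) maximizing semblance.

    Each trace is shifted by -dip * offset samples before stacking;
    np.roll's wrap-around is a toy simplification.
    """
    return max(trial_dips,
               key=lambda p: semblance(np.array(
                   [np.roll(tr, -int(round(p * x)))
                    for tr, x in zip(traces, offsets)])))
```

A dip of zero reduces this to the flat-horizon case; scanning a range of dips is what lets the estimate follow steeply dipping reflectors.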
We give a detailed comparison of damping and difference smoothing as means of regularising inverse calculations. We show that damping is potentially disastrous in multiparameter inversions since the small singular values may control long-spatial-wavelength features in the solution, whereas difference smoothing avoids this problem entirely by down-weighting the rough singular vectors wherever they happen to lie in the spectrum. Further, we show that regularisation can produce rather different results depending on whether the inversion is done via jumping or creeping. In particular, we find that if the inversion is regularised by difference smoothing, then jumping and creeping will give the same results only if the initial model is smooth. We illustrate these ideas by inverting refracted seismic arrivals to image the Earth's near-surface weathering layer. This 'refraction statics' problem has a fundamental long-wavelength ambiguity, so damping merely introduces undesirable long-wavelength perturbations to the solution.
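The contrast between the two regularisers can be sketched generically: both solve a stacked least-squares system min ||Ax - b||^2 + mu^2 ||Rx||^2, differing only in whether the penalty operator R is the identity (damping, which penalizes model size) or a first-difference matrix (difference smoothing, which penalizes roughness). This is a minimal sketch for a generic linear problem; the function name and interface are our assumptions:

```python
import numpy as np

def regularized_lsq(A, b, mu, mode="damping"):
    """Solve min ||Ax - b||^2 + mu^2 ||R x||^2 by stacking.

    mode='damping'   -> R = I (identity): penalizes model amplitude.
    mode='smoothing' -> R = first-difference operator: penalizes roughness,
                        leaving constant (long-wavelength) shifts unpenalized.
    """
    n = A.shape[1]
    if mode == "damping":
        R = np.eye(n)
    else:
        R = np.diff(np.eye(n), axis=0)   # (n-1) x n first differences
    A_aug = np.vstack([A, mu * R])
    b_aug = np.concatenate([b, np.zeros(R.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x
```

Note that the difference operator has the constant vector in its null space, so difference smoothing leaves the long-wavelength component of the model to be determined by the data alone rather than biasing it toward zero, which is the point made in the abstract.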
Due to ill-conditioning of the linearised forward problem and the presence of noise in most data, inverse problems generally require some kind of 'regularisation' in order to generate physically plausible solutions. The most popular method of regularised inversion is damped least squares. Damping sometimes forces the solution to be smoother than it otherwise would be by raising all of the eigenvalues in an ad hoc fashion. An alternative is described, based upon the method of least-absolute deviation, which has a property known in the statistical literature as robustness. An account of robust inversion methods, covering their history and computational development, is given. The key computational technique turns out to be preconditioned conjugate gradient, an algorithm which had as its genesis the 'method of orthogonal vectors' of Fox, Huskey and Wilkinson (1948). Applications are illustrated from seismic tomography and inverse scattering, two of the most computationally intensive tasks in inverse theory.
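Conjugate gradients, named above as the key computational technique, can be sketched for a symmetric positive-definite system. Here a simple Jacobi (diagonal) preconditioner stands in for the problem-specific preconditioners used in real inversions; the function name and interface are our assumptions:

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradients for SPD systems A x = b.

    M = diag(A) is the simplest preconditioner choice; each iteration
    needs only one matrix-vector product with A.
    """
    m_inv = 1.0 / np.diag(A)          # inverse of the diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                     # residual
    z = m_inv * r                     # preconditioned residual
    p = z.copy()                      # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # conjugate update of the direction
        rz = rz_new
    return x
```

For normal equations arising in tomography, A would be the (regularised) operator A^T A, which is never formed explicitly in large problems; the sketch above assumes a small dense matrix for clarity.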
This paper describes the application of tomography to seismic travel-time inversion. There are various implementations of travel-time tomography. In reflection tomography, sources and receivers are on the surface of the Earth and the principal seismic events are reflections from subsurface velocity discontinuities. In transmission tomography, sources and/or receivers may be buried beneath the surface and the events correspond to direct, or unreflected, arrivals; this is the analogue of medical tomography. There are also cases in which both direct as well as reflected arrivals are important, such as in Vertical Seismic Profiling. The latter is a direct application of the first two, but is not discussed in any detail here. It is also shown how the iterative use of travel-time tomography and depth migration can produce much enhanced subsurface images. Examples of both transmission tomography and reflection tomography combined with depth migration illustrate the methods.
Due to ill-conditioning and the presence of noise in the data, tomographic inversion by traditional techniques usually requires some sort of smoothing, either to filter out small-scale variations in the solution which are beyond the resolution of the data or to produce smooth background models during the iterative inversion procedure. We shall describe the use of α-trimmed means for smoothing computed tomograms. The α-trimmed means range continuously from the median to the mean. They are all efficient computationally and can be tailored to smooth the tomogram without destroying sharp geological features, such as faults and reflecting layers, and without introducing artifacts. Further, the median and trimmed means close to it are statistically robust in the sense that they reject bursts of noise, whereas the mean simply averages the noise into the solution and can therefore be adversely affected by outliers in the data. All of these filters can be applied directly to the computed tomogram after solution, or during the solution phase as a means to stabilize the inversion.
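The α-trimmed mean itself is simple to state: sort the samples, discard the fraction α from each tail, and average what remains, so α = 0 gives the ordinary mean and α → 0.5 approaches the median. A minimal sketch (the function name and tail-index handling are our assumptions):

```python
import numpy as np

def alpha_trimmed_mean(values, alpha):
    """alpha-trimmed mean of a sample window.

    alpha = 0.0 gives the mean; alpha near 0.5 approaches the median.
    Trims int(alpha * n) samples from each tail before averaging.
    """
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    k = min(int(alpha * n), (n - 1) // 2)   # keep at least one sample
    return v[k:n - k].mean()
```

To smooth a tomogram, this estimator would be applied in a sliding window around each cell, with α chosen large enough to reject noise bursts but small enough not to round off sharp geological features.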
Seismic inversion can be formulated by considering a linearized integral relation which is deduced from the wave equation. This Born inversion approach is equivalent to linear least‐squares inversion for a particular parameterization of the medium. The least‐squares solution is a member of a family of generalized LP norm solutions which are deduced from a maximum‐likelihood formulation. This formulation allows design of various statistical inversion solutions. We present two iterative solutions to the one‐dimensional (1-D) seismic inverse problem: the iterative least‐squares (ILS) and the iterative reweighted least‐squares (IRLS) methods. The ILS method involves solving a distorted background velocity problem after the initial least‐squares solution is obtained. The IRLS method is used as a robust regression technique which is better suited for dealing with certain types of noise and is computationally faster than ILS. Several numerical examples demonstrate that the IRLS method accurately estimates impedance profiles despite the presence of large‐amplitude noise spikes in the seismic traces. Numerical experiments suggest that the IRLS inversion can also be insensitive to noise bursts which are of a lower frequency band than noise spikes.
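The IRLS idea can be sketched generically: each iteration solves a weighted least-squares problem whose weights |r|^(p-2) come from the previous residuals, so p = 1 approximates a least-absolute-deviation fit that discounts spiky noise. This is a toy version for a generic linear problem, not the paper's 1-D impedance algorithm; the interface, starting guess, and residual clipping are our choices:

```python
import numpy as np

def irls(A, b, p=1.0, n_iter=20, eps=1e-6):
    """Iteratively reweighted least squares for min ||Ax - b||_p.

    Each pass solves the weighted normal equations A^T W A x = A^T W b
    with W = diag(|r|^(p-2)); residuals are clipped at eps to avoid
    division by zero when a data point is fit exactly.
    """
    x, *_ = np.linalg.lstsq(A, b, rcond=None)      # start from the L2 solution
    for _ in range(n_iter):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)  # large residual -> small weight
        Aw = A * w[:, None]                        # rows of A scaled by weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```

With p = 1, a single huge spike gets weight ~1/|spike| and barely influences the fit, which mirrors the abstract's observation that IRLS accurately estimates profiles despite large-amplitude noise spikes.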