Stacking spectra provide maximum-likelihood estimates of the stacking velocity, or of the ray parameter, of well-separated reflections in additive white noise. However, the resolution of stacking spectra is limited by the aperture of the array and the frequency of the data. Despite these limitations, parametric spectral estimation methods achieve better resolution than stacking does. To improve resolution, the parametric methods introduce a parsimonious model for the spectrum of the data. In particular, when the data are modeled as a superposition of wavefronts, the eigenstructure of the data covariance matrix can be used to obtain high-resolution spectra. The traditional stacking spectra can also be expressed as a function of the data covariance matrix and compared directly with the eigenstructure spectra; the superiority of the latter in separating closely interfering reflections is then apparent from a simple geometric interpretation. Eigenstructure methods were originally developed for narrow-band signals, whereas seismic reflections are wide-band and transient in time. To take advantage of the full bandwidth of seismic data, we average spectra from several frequency bands. We choose each frequency band wide enough that we can average estimates of the covariance matrix over time, yielding a robust covariance estimate from short data sequences. A field-data example shows that the high-resolution estimators are particularly attractive for estimating local spectra with short arrays. Several realistic synthetic examples of stacking-velocity spectra illustrate the improved performance of the new methods in comparison with conventional processing.
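The contrast between the two families of spectra can be illustrated with a minimal NumPy sketch, under simplifying assumptions not taken from the paper: a single narrow frequency band, two plane wavefronts with known count crossing a uniform line array, and a sample covariance averaged over independent snapshots. The conventional (stacking-type) spectrum scans a(p)ᴴ R a(p) over trial ray parameters, while the eigenstructure (MUSIC-type) spectrum measures the distance of each steering vector from the noise subspace of R. All variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_snap = 16, 200            # receivers, time snapshots
dx, f0 = 25.0, 30.0               # receiver spacing (m), frequency (Hz)
x = np.arange(n_rx) * dx
p_true = [2.0e-4, 2.6e-4]         # two closely spaced ray parameters (s/m)

def steer(p):
    # plane-wave steering vector for ray parameter p at frequency f0
    return np.exp(-2j * np.pi * f0 * p * x)

# synthetic narrow-band data: two wavefronts plus additive white noise
A = np.stack([steer(p) for p in p_true], axis=1)          # (n_rx, 2)
s = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((n_rx, n_snap))
               + 1j * rng.standard_normal((n_rx, n_snap)))
d = A @ s + noise

R = d @ d.conj().T / n_snap                               # sample covariance

# noise subspace: eigenvectors orthogonal to the 2 signal eigenvectors
w, V = np.linalg.eigh(R)                                  # eigenvalues ascending
E_noise = V[:, :-2]

p_scan = np.linspace(1.0e-4, 4.0e-4, 600)
stacking = np.array([np.real(steer(p).conj() @ R @ steer(p)) for p in p_scan])
eigen = np.array([1.0 / np.linalg.norm(E_noise.conj().T @ steer(p)) ** 2
                  for p in p_scan])
```

With this geometry the two ray parameters fall within one Rayleigh resolution cell of the array, so the stacking spectrum tends to merge them into a single broad lobe, while the eigenstructure spectrum shows two sharp peaks near the true values.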
Free-surface-related multiples in marine seismic data are commonly attenuated by adaptive subtraction of the predicted multiple energy. An alternative method, based on deconvolution of the upgoing wavefield by the downgoing wavefield, was previously applied to ocean-bottom data. We apply the deconvolution method to towed-streamer data acquired in an over/under configuration. We also use direct-arrival deconvolution, which results in source-wavelet designature only, as a benchmark to verify the full multiple-deconvolution result. Detailed synthetic-data analysis, including sensitivity tests, explains each data-processing step and its effect on the final result. We then apply this verified preprocessing sequence to field data from the Kristin area of the North Sea, with a focus on predicting the direct arrival using the near-field hydrophone method. Prestack evaluation of the results shows that the method applied to the field data provides designature, source-side deghosting, and attenuation of multiples. We show comparable stacked results from our method and from 2D iterative surface-related multiple elimination. The workflow has the benefit that it requires neither an adaptive subtraction step nor iterative application. However, an accurate direct-arrival prediction is essential for successful application of the method. This prediction is obtained from near-field hydrophone measurements, which can be recorded with some commercial acquisition systems.
Field survey characteristics can have an important impact on the quality of multiples predicted by surface-related multiple elimination (SRME) algorithms. This paper examines the effects of three such characteristics: inline spatial sampling, source stability, and cable feathering. Inadequate spatial sampling causes aliasing artifacts; these can be reduced by f-k filtering, at the expense of limiting the bandwidth of the predicted multiples. Source-signature variations create artifacts in the predicted multiples because they introduce spatial discontinuities. Variations from a well-behaved air-gun array produced artifacts with an rms amplitude about 26 dB below that of multiples predicted with no variations. Cable feathering has a large impact on the timing errors in multiples predicted by 2-D SRME when it is applied in areas with cross dip. All of these problems can be reduced by a combination of better survey design, advanced data acquisition technologies, and additional data-processing steps.

Introduction

In this paper we assume that surface-related multiple elimination (SRME) is the algorithm of choice for removing multiples from marine seismic data. SRME is commonly practiced as a two-step process. First, surface multiples are predicted by a multidimensional, convolution-like iterative process in which the prediction operators consist of the recorded seismic traces themselves [1]. Second, the predicted multiples are adaptively subtracted from the original traces. Under ideal conditions, SRME can produce nearly perfect multiple attenuation in a single inversion-like step. Under realistic conditions, however, such an inversion usually produces poor results because of errors in the multiple prediction. Adaptive subtraction and the two-step process attempt to compensate for those errors. Residual multiples occur when the errors in the predicted multiples are too large or too variable to be handled adequately by adaptive subtraction.
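The two-step process can be sketched in a heavily simplified form, with assumptions that go beyond the text: a 2-D survey on a regular grid with coincident shot and receiver positions, no obliquity or source-signature corrections in the prediction, and a single-trace least-squares matching filter standing in for full adaptive subtraction. Function names are illustrative.

```python
import numpy as np

def predict_multiples(D):
    """One pass of surface-multiple prediction: at each frequency the data
    matrix (shots x receivers) is multiplied by itself, so the recorded
    traces act as their own prediction operators.  D has shape
    (n_shot, n_rcv, n_t); scaling and obliquity corrections are omitted."""
    Df = np.fft.rfft(D, axis=-1)
    Mf = np.einsum('srf,rqf->sqf', Df, Df)   # sum over surface positions
    return np.fft.irfft(Mf, D.shape[-1], axis=-1)

def adaptive_subtract(d, m, nf=11):
    """Single-trace adaptive subtraction: estimate a short matching filter
    by least squares, shape the predicted multiples m, and subtract them
    from the trace d.  np.roll wraps around -- a simplification that is
    tolerable for short filters on padded traces."""
    C = np.column_stack([np.roll(m, k) for k in range(nf)])
    f, *_ = np.linalg.lstsq(C, d, rcond=None)
    return d - C @ f
```

In this toy setting a predicted multiple that differs from the data only by a small delay and a scale factor is removed almost completely by the matching filter; the prediction errors discussed below (aliasing, signature variation, feathering-induced timing errors) are precisely those that such a short filter cannot absorb.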
Since even relatively weak residual multiples can cause serious interpretation difficulties in seismic data sets, geophysicists strive to minimize them. Although the SRME multiple-prediction algorithm requires no information about the subsurface geology, it places strict requirements on the data acquisition process [2]. For example, SRME requires full knowledge of the acquisition wavelet and complete, regular spatial sampling of the surface wavefield. Most seismic surveys fail to meet these requirements, resulting in the aforementioned errors in the predicted surface multiples. There are three ways of reducing these errors: more appropriate data-acquisition practices and survey design, enhancements to the multiple-prediction and adaptive-subtraction algorithms, and preprocessing steps that tailor the data for multiple prediction. Selecting the best method for reducing a particular type of error requires an understanding of exactly how that error is related to the various field survey characteristics. Suppose, for example, that a geophysicist is designing a 3-D seismic survey in an area where the prevailing currents run perpendicular to the predominant dip direction. Is better error reduction achieved by cross-dip shooting, which minimizes the effects of cable feathering, or by dip shooting, which minimizes subsurface-related 3-D effects in the recorded wavefield? This question can be answered only after a careful analysis of the errors expected in the predicted multiples.