The coefficient of coherence between two stationary time series was introduced by Wiener in 1930. It is related to the signal-to-noise ratio and to the minimum prediction error, and it has important invariance properties. As an estimate of this parameter, most geophysicists have used the so-called "sample coherence." An approximate distribution of the sample coherence for Gaussian data has been derived by N. R. Goodman. We have tested this distribution for validity and robustness (insensitivity to the Gaussian assumption) by means of Monte Carlo experiments, and it has passed the tests. The Goodman distribution provides a means of constructing estimates of the true coherence that are better than the widely used sample coherence. It can also be used to calculate confidence intervals. Finally, it forms a basis for choosing the lag window and data window necessary for best estimation of the true coherence. For good estimates of the true coherence, two precautions must be observed: (1) the cross-spectrum and power spectra of the two time series must be smoothly varying over the width of the spectral window, and (2) the ratio of the length of the data window to the lag window must be large. For most seismic work the second requirement severely limits the spectral resolution. Examples show that large errors can result if this resolution is not sufficient to satisfy the first requirement. In many geophysical studies the parameter of interest is the signal-to-noise ratio. Because of its relation to the coherence, the Goodman distribution provides a basis for its estimation as well.
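As an illustration of the quantity being estimated, the following is a minimal Python sketch (not from the paper) that computes the sample coherence of two synthetic Gaussian series sharing a common signal; the series, noise levels, and segment length are assumptions chosen only to show the smoothing/resolution trade-off that the two precautions describe.

```python
# Minimal sketch: sample (magnitude-squared) coherence of two synthetic
# Gaussian series sharing a common signal. All parameters are illustrative.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
n = 8192
common = rng.standard_normal(n)                  # shared "signal"
x = common + 0.5 * rng.standard_normal(n)        # series 1: signal + noise
y = common + 0.5 * rng.standard_normal(n)        # series 2: signal + independent noise

# nperseg sets the spectral-window width: shorter segments give more
# averaging (more degrees of freedom) but coarser frequency resolution.
f, gamma2 = coherence(x, y, fs=1.0, nperseg=256)

# For this model the true coherence is P_s^2 / (P_s + P_n)^2 = 1 / 1.25^2 = 0.64
# at every frequency, so the sample values should scatter around 0.64.
print(f"mean sample coherence: {gamma2.mean():.2f} (true value 0.64)")
```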
A long‐spacing velocity log contains almost the same information as an ideal short‐spacing log, but in a distorted form with added noise. The distortion can be thought of as a moving average or smoothing filter. Its inverse, called a “sharpening” filter by astronomers, amplifies noise. If the inverse is to be useful, it must be designed with a balance between errors due to noise amplification and those due to incomplete sharpening. The Wiener optimum filter theory gives a prescription for achieving this balance. The result is called an optimum inverse filter. We have calculated finite‐memory optimum inverse filters using the IBM 704. We have applied them to actual data, digitized in the field, to produce synthetic short‐spacing velocity logs. These we have compared with their field counterparts. The synthetic logs have less calibration error and are free from noise spikes. The general agreement is good.
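The balance between noise amplification and incomplete sharpening can be illustrated with a minimal frequency-domain sketch (not the paper's finite-memory filter); the moving-average span, noise level, and noise-to-signal ratio nsr are illustrative assumptions.

```python
# Minimal sketch of a Wiener-style inverse ("sharpening") filter for a
# smoothed, noisy log. Smoothing span, noise level, and nsr are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 1024
short_log = 10.0 + 0.01 * np.cumsum(rng.standard_normal(n))   # hypothetical ideal log

# Long-spacing tool modeled as a moving-average (smoothing) filter plus noise.
span = 15
h = np.ones(span) / span
H = np.fft.rfft(h, n)
long_log = np.fft.irfft(np.fft.rfft(short_log) * H, n) + 0.02 * rng.standard_normal(n)

# Wiener inverse: conj(H) / (|H|^2 + nsr). The noise-to-signal ratio nsr
# balances noise amplification against incomplete sharpening; nsr -> 0
# recovers the unstable exact inverse 1/H.
nsr = 1e-2
W = np.conj(H) / (np.abs(H) ** 2 + nsr)
recovered = np.fft.irfft(np.fft.rfft(long_log) * W, n)

print(f"rms sharpening residual: {np.std(recovered - short_log):.3f}")
```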
Optimum systems have been developed to correspond to the sub-optimum moveout discrimination systems presented previously by several authors. The seismic data on the lth trace are assumed to be additive signal S with moveout τ_l, coherent noise N with moveout σ_l, and incoherent noise R_l, expressed x_l(t) = S(t − τ_l) + N(t − σ_l) + R_l(t), where S, N, and R_l are independent, second-order stationary random processes and the moveouts τ_l and σ_l are random variables with prescribed probability density functions. The signal estimate Ŝ is produced by filtering each trace with its corresponding filter h_l and summing the outputs, Ŝ(t) = Σ_l (h_l * x_l)(t), where * denotes convolution. We choose the system of filters h_l to make the signal estimate optimum in the Wiener sense (minimum mean-square error over the signal ensemble). For the special cases discussed, the moveouts are linear functions of the trace number l, determined by the moveout/trace τ for signal and σ for noise, so that τ_l = lτ and σ_l = lσ. Thus, the optimum system is determined by the probability densities of τ and σ together with the coherent and incoherent noise/signal power spectrum ratios. In comparison, suboptimum systems are controlled completely by a cut-off moveout/trace τ_c: events whose moveout/trace falls within ±τ_c of the expected dip moveout/trace are accepted, and those falling outside this range are suppressed. Suboptimum systems can be derived from optimum systems by choosing probability densities for τ and σ that are uniform within the above ranges and letting the incoherent noise/signal ratio be very large. Optimum systems have increased flexibility over suboptimum systems through control over the probability density functions and the power spectrum ratios, and allow increased noise suppression in selected regions of f-k space.
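A minimal sketch of the filter-and-sum structure Ŝ(t) = Σ_l (h_l * x_l)(t) is given below; for simplicity the per-trace filters are reduced to time shifts at the expected signal moveout (a plain shift-and-sum stack rather than the Wiener-optimum filters derived in the paper), and the geometry, moveouts, and noise levels are illustrative assumptions.

```python
# Minimal sketch: filter-and-sum signal estimate across traces, with the
# per-trace "filters" reduced to time shifts at the expected signal moveout.
# Geometry, moveouts, and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_traces, n_samples, dt = 12, 500, 0.004        # 12 traces, 2 s at 4 ms
t = np.arange(n_samples) * dt

def ricker(t, t0, f0=25.0):
    """Ricker wavelet centered at t0 (illustrative source pulse)."""
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

tau_sig   = 0.004      # signal moveout per trace (s/trace)
tau_noise = 0.016      # coherent-noise moveout per trace (s/trace)

traces = np.empty((n_traces, n_samples))
for l in range(n_traces):
    signal    = ricker(t, 0.8 + l * tau_sig)
    coh_noise = 0.8 * ricker(t, 0.5 + l * tau_noise)
    traces[l] = signal + coh_noise + 0.2 * rng.standard_normal(n_samples)

# Shift each trace back by the expected signal moveout and sum:
# S_hat(t) = (1/L) * sum_l x_l(t + l * tau_sig).
shifts = np.round(np.arange(n_traces) * tau_sig / dt).astype(int)
s_hat = np.mean([np.roll(traces[l], -shifts[l]) for l in range(n_traces)], axis=0)
```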
Using optimum filter theory as a starting point, we describe a method for the design of practical multi-trace seismic data processing systems. We assume the inputs to be the superposition of signal, coherent noise, and incoherent noise. The signal and coherent noise moveouts are described statistically by their probability densities. Our approach is to split the system into two stages. The first stage achieves optimum noise suppression but distorts the signal. The signal distortion is reduced in the second stage by an optimum finite-memory inverse filter. The system that is obtained using our method of design depends upon the form of the probability density functions. We show two examples, ghost suppression and velocity filtering. In ghost suppression we choose a model with moveouts known exactly, which corresponds to delta functions for the probability densities. In velocity filtering the signal and coherent noise moveouts are equally probable within non-overlapping ranges. The resulting system in each case is both simple and effective. In ghost suppression a simple shift and subtract cancels the coherent noise, and the signal distortion is reduced by an inverse filter. The velocity filter system consists of differentiated moving averages applied to each trace, followed by a 90° phase shift and a low-pass filter.

DESIGN OF SUB-OPTIMUM FILTER SYSTEMS FOR MULTI-TRACE SEISMIC DATA PROCESSING

I. Introduction. The use of Wiener optimum filter theory in multi-trace seismic data processing was proposed by Robinson (1954) and applied by Burg (1962). In this theory economic constraints are not imposed, and the resulting filters are costly to construct and apply. Our purpose is to describe a method which goes beyond the optimum theory, producing practical filters whose performance is excellent. We begin by selecting a basic system which is optimum for high noise levels. For actual data this system effectively suppresses the noise but badly distorts the signal. We then apply an inverse filter to reduce the signal distortion.
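The ghost-suppression example can be illustrated with a minimal two-trace sketch of the two-stage design: a shift-and-subtract first stage cancels a coherent event of known moveout and leaves the signal convolved with the operator δ(t) − δ(t − Δ); a regularized inverse filter then reduces that distortion. All parameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the two-stage idea: shift-and-subtract to cancel a
# coherent event of known moveout, then an inverse filter to undo the
# resulting signal distortion. All names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n, dt = 1000, 0.004
t = np.arange(n) * dt

def ricker(t, t0, f0=25.0):
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

s = ricker(t, 1.2)                       # signal, aligned on both traces
w = 0.9 * ricker(t, 0.6, f0=15.0)        # coherent noise on trace 0
dn = 30                                  # known noise moveout (samples) between traces

x0 = s + w
x1 = s + np.roll(w, dn)                  # same coherent noise, delayed by dn on trace 1

# Stage 1: delay trace 0 by dn and subtract -> coherent noise cancels,
# signal becomes s(t) - s(t - dn*dt), i.e. s convolved with g = [1, 0, ..., -1].
d = x1 - np.roll(x0, dn)

# Stage 2: regularized inverse of the known distortion operator g.
g = np.zeros(n)
g[0], g[dn] = 1.0, -1.0
G = np.fft.rfft(g)
eps = 1e-2                               # stabilizes the notches of G
W = np.conj(G) / (np.abs(G) ** 2 + eps)
s_hat = np.fft.irfft(np.fft.rfft(d) * W, n)
```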
The deconvolution process is widely used to enhance seismic data by suppressing distortions of the shot pulse caused by such things as reverberations and ghosts. The process consists of estimating the correlation function from the data, determining the inverse filter using the Levinson algorithm, and applying the inverse filter to the data. This paper is concerned with the estimation problem. Certain conclusions about the estimation problem are suggested by the theory of power spectra developed by Tukey and others. By means of a Monte Carlo simulation of the deconvolution process, we have tested these conclusions: (1) Severely distorted data should be prewhitened. (2) Truncators (lag windows) with the same number of degrees of freedom yield the same error. (3) There is an optimum number of degrees of freedom for a fixed data window. (4) Due to time variance in the data, there is an optimum length of data window. Monte Carlo simulation can be used to estimate the optimum values in (3) and (4) and so improve the performance of the deconvolution process.
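The estimation-plus-inversion chain can be sketched as follows, assuming a synthetic reverberatory trace; the filter length, prewhitening fraction, and pulse are illustrative, and the Toeplitz normal equations are solved with scipy's Levinson-type solver in place of a hand-coded Levinson recursion.

```python
# Minimal sketch of spiking deconvolution: estimate the autocorrelation,
# prewhiten, solve the Toeplitz normal equations (Levinson-type recursion),
# and apply the inverse filter. Parameters are illustrative.
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(4)
n, nfilt = 2000, 60
prewhitening = 0.01          # fraction added to the zero-lag autocorrelation

# Synthetic trace: sparse random reflectivity convolved with a reverberatory pulse.
reflectivity = rng.standard_normal(n) * (rng.random(n) < 0.05)
pulse = np.array([1.0, 0.0, 0.0, -0.7, 0.0, 0.0, 0.49])   # simple reverberation train
trace = np.convolve(reflectivity, pulse)[:n]

# Autocorrelation estimated from the data (biased estimate, lags 0..nfilt-1).
full = np.correlate(trace, trace, mode="full")
r = full[n - 1 : n - 1 + nfilt] / n
r[0] *= 1.0 + prewhitening

# Spiking (zero-delay) inverse filter: solve R a = [1, 0, ..., 0].
rhs = np.zeros(nfilt)
rhs[0] = 1.0
a = solve_toeplitz(r, rhs)

deconvolved = np.convolve(trace, a)[:n]
```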
It is common in petroleum exploration to lower a variety of instruments down a well to record rock properties as a function of depth. These recordings are called well logs and are of many different types. One which was developed in the early fifties is called the sonic log or continuous velocity log (CVL). This log is a recording of the compressional velocity in the vertical direction as a function of depth. With the availability of this log the question arose of its connection with data obtained from seismic surveys. This subject was addressed in a very influential paper by Peterson, Fillipone & Coker. The treatment was highly simplified mathematically: it was based on a layered earth approximation, and only treated the leading term in a decomposition of the seismogram by order of reflection. This leading term has come to be known as the primary reflection synthetic seismogram. Subsequent to this paper, higher order terms were calculated, the so-called multiple reflection terms. All these calculations were based upon a layered approximation to the CVL. These layered approximations were physically realizable in the sense that they could actually be constructed from suitable materials and real physical experiments could be conducted with them. It is, of course, true in such models that there is both a reflection coefficient and a transmission coefficient associated with each interface. In the mathematical approximations there has always been a question of whether and how to treat the transmission effects. In order to shed light on this problem we here start with the continuous formulation, rather than with a layered model. We then consider a physically realizable layered approximation. A related layered model without transmission losses is then formulated and, by a limiting process, converted to a continuous model. This latter model is then compared to the original continuous model to reveal explicitly the effects of transmission losses. It should be emphasized that the interest in mathematical approximations derives from their connection with the inverse problem: that of obtaining the rock properties from seismic data. In these days of fast digital computers there is no longer any problem in computing synthetic seismograms based upon one-dimensional wave propagation theory. For the inverse problem, which is basically a problem in statistical estimation theory, a decomposition of the exact solution by order of reflection remains important, and the effect of transmission losses is a key issue.
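A minimal sketch of the primaries-only calculation from a layered approximation is given below, with an optional factor for the two-way transmission losses discussed above; the layer velocities, densities, thicknesses, and wavelet are illustrative assumptions.

```python
# Minimal sketch: primary-reflection synthetic seismogram from a layered
# approximation to a velocity (and density) log, with an optional factor for
# two-way transmission losses. All layer values are illustrative.
import numpy as np

def ricker(t, t0, f0=30.0):
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Layered model: velocity (m/s), density (kg/m^3), thickness (m).
velocity  = np.array([1800.0, 2200.0, 2600.0, 3100.0, 3500.0])
density   = np.array([2000.0, 2150.0, 2300.0, 2450.0, 2550.0])
thickness = np.array([300.0,  250.0,  200.0,  300.0,  400.0])

impedance = velocity * density
refl = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])

# Two-way normal-incidence traveltime to each interface.
oneway = np.cumsum(thickness / velocity)
twt = 2.0 * oneway[:-1]

# Optional transmission losses: each deeper primary is scaled by the product
# of (1 - c^2) over the interfaces crossed on the way down and back.
include_transmission = True
amp = refl.copy()
if include_transmission:
    loss = np.cumprod(np.concatenate(([1.0], 1.0 - refl[:-1] ** 2)))
    amp = refl * loss

# Sample the reflectivity series and convolve with a wavelet
# (the wavelet's 0.1 s center simply delays the whole trace by 0.1 s).
dt, tmax = 0.002, 1.5
t = np.arange(0.0, tmax, dt)
series = np.zeros_like(t)
idx = np.round(twt / dt).astype(int)
series[idx] += amp
seismogram = np.convolve(series, ricker(t, 0.1), mode="full")[: len(t)]
```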