To estimate near-surface time anomalies, it is commonly assumed that apparent seismic reflection times are the sum of "surface-consistent" source and receiver static terms, "subsurface-consistent" structure and residual normal moveout (RNMO) terms, and indeterminate noise. The model parameters (statics, RNMO, and structural terms) that best satisfy, in a least-squares sense, the traveltime observations in multifold seismic data are solutions to a set of linear simultaneous equations. Because these equations are ill conditioned and their solutions are known to be nonunique, conventional direct methods of solution are not applicable. Problems of this type, which have both overdetermined and underconstrained aspects, can be analyzed using the general linear inverse methodology. In this approach, observed time deviations are decomposed into linear combinations of orthogonal eigenvectors, each of which determines a related linear combination of model parameters. A property of this decomposition is that the uncertainty (standard deviation) in a model-parameter eigenvector is functionally related to the uncertainty in its associated observation eigenvector. In particular, statics corrections having spatial wavelengths much shorter than a cable length have smaller uncertainties than the observations themselves, whereas long-wavelength corrections have much larger standard deviations and are thus poorly determined. In practice, iterative methods are commonly used to solve the large number of equations encountered for typical seismic profiles. Using the Gauss-Seidel iterative formalism, we can determine in advance how many iterations are required to obtain a given reduction of the original error for any wavelength contribution. Errors in shorter-wavelength corrections converge rapidly to zero, while a heavier price is exacted to compute longer-wavelength corrections. However, because those longer-wavelength corrections can be estimated only with large uncertainty, it is desirable to exclude them from the statics solution through judicious choice of the number of iteration cycles.
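The iterative scheme described above can be illustrated with a minimal sketch. This is not the paper's full formulation (it omits the structural and RNMO terms and uses hypothetical synthetic values); it shows only how Gauss-Seidel alternation recovers surface-consistent source and receiver statics from redundant traveltime deviations, up to the nonunique long-wavelength (here, constant) component:

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_rcv = 20, 20

# True (unknown) surface-consistent statics, in milliseconds.
s_true = rng.normal(0.0, 8.0, n_src)   # source statics
r_true = rng.normal(0.0, 8.0, n_rcv)   # receiver statics

# Observed traveltime deviations: t_ij = s_i + r_j + noise.
t_obs = s_true[:, None] + r_true[None, :] + rng.normal(0.0, 1.0, (n_src, n_rcv))

# Gauss-Seidel iteration: alternately update each static as the mean
# residual over all traces sharing that source (or receiver).
s = np.zeros(n_src)
r = np.zeros(n_rcv)
for _ in range(10):
    s = (t_obs - r[None, :]).mean(axis=1)   # update source statics
    r = (t_obs - s[:, None]).mean(axis=0)   # update receiver statics

# The decomposition is nonunique up to a constant shift (the extreme
# long-wavelength component): only s_i + r_j is determined, so compare
# the short-wavelength parts after removing the means.
err_s = np.abs((s - s.mean()) - (s_true - s_true.mean())).max()
print(err_s < 2.0)
```

Consistent with the text, the mean-removed (short-wavelength) statics converge within a few iterations, while the absolute datum, the longest-wavelength component, is never determined by the data.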
Aliasing is generally understood to mean that sampling causes frequencies above the Nyquist frequency to be irretrievably "mixed" with those below it. As a result, the perceived need to prevent signal aliasing has played a major role in limiting usable signal bandwidth. Yet the evidence of aliasing in multichannel seismic data is often paradoxical and contradictory, suggesting that aliasing may be more apparent than real. A simple, exact sample-mapping methodology, random-sample-interval imaging, can be used to overcome aliasing in many of the processes currently used for imaging seismic data. The robust process recovers broadband signal, on both synthetic and real data, with frequencies significantly above the Nyquist limit predicted by the 1-D sampling theorem. The method appears to be applicable whenever the signal trajectory is intersected irregularly by a sampling grid of two or more dimensions. The results suggest that both spatial and temporal aliasing of signal may be resolved simultaneously by this strategy.
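The "mixing" referred to above can be made concrete with a small sketch (the frequencies are hypothetical, chosen only for illustration): after uniform sampling at 4 ms (Nyquist 125 Hz), a 150 Hz cosine folds to 2 × 125 − 150 = 100 Hz and becomes sample-for-sample identical to a true 100 Hz cosine:

```python
import numpy as np

dt = 0.004                      # 4 ms sampling interval
f_nyq = 1.0 / (2.0 * dt)        # 125 Hz Nyquist frequency
n = np.arange(64)               # sample indices

# A 150 Hz cosine sampled at 4 ms folds to 2*f_nyq - 150 = 100 Hz:
# the two sampled sequences are numerically indistinguishable.
x_hi = np.cos(2 * np.pi * 150.0 * n * dt)
x_lo = np.cos(2 * np.pi * 100.0 * n * dt)
print(np.allclose(x_hi, x_lo))
```

No processing applied to a single such channel can separate the two; the paper's point is that the redundancy across irregularly intersecting channels changes this situation.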
A method is presented for processing multichannel seismic data that recovers, simultaneously and unambiguously, signal frequencies above and below the temporal Nyquist of the input data. By exploiting signal stationarity and redundancy, I show that the sample interval of a single input channel does not uniquely determine the maximum recoverable frequency. The method is tested using normal moveout (NMO) and stack on synthetic model data. Frequencies of 10 to 225 Hz are recovered from input data sampled at 4 ms. I then apply the method to real data. The results demonstrate that antialias strategies based on the one-dimensional Whittaker-Shannon sampling theorem (1) impose an unnecessary limit on the ability to recover high frequencies and maximize signal resolution. The method is applicable wherever the trajectory of the signal to be imaged irregularly intersects the sampling grid. It is appropriate for resolving both temporal and spatial aliasing concerns.

Introduction

Since the inception of digital signal processing, concerns about signal aliasing have played a major role in determining usable signal bandwidth and the cost of obtaining that bandwidth. For example, the authors of a 1991 article in The Leading Edge observed that "...the major use of Nyquist's work in geophysics is the elimination of alias frequencies on digitally recorded seismic data. The Nyquist frequency is the highest frequency that can be obtained for a given sampling interval" (2). Such concerns have led the seismic industry to systematically restrict potential signal frequencies to below the Nyquist frequency through the application of antialiasing filters based on the Whittaker-Shannon sampling theorem. This paper describes a simple methodology for the recovery of signal components in excess of the Nyquist limit predicted by the one-dimensional Whittaker-Shannon sampling theorem.
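A toy sketch can convey why such recovery is possible at all. This is not the paper's algorithm; it is an assumed, simplified illustration in which many channels record the same stationary signal at 4 ms, but moveout gives each channel a different sub-sample time shift. Pooling the samples yields an irregular sampling far denser than any single channel, and a 200 Hz component, well above each channel's 125 Hz Nyquist, is determined unambiguously:

```python
import numpy as np

dt = 0.004                              # per-channel interval (Nyquist 125 Hz)
f_sig = 200.0                           # signal frequency above 125 Hz
rng = np.random.default_rng(1)
shifts = rng.uniform(0.0, dt, 16)       # one sub-sample shift per channel

# Each channel samples the same 200 Hz cosine on its own shifted grid.
times, values = [], []
for shift in shifts:
    t = shift + dt * np.arange(64)      # this channel's sample times
    times.append(t)
    values.append(np.cos(2 * np.pi * f_sig * t))

t_all = np.concatenate(times)
v_all = np.concatenate(values)

# Least-squares fit of the 200 Hz amplitude to the pooled, irregular
# samples: the frequency is recovered exactly (amplitude 1, phase 0)
# even though it exceeds every individual channel's Nyquist.
basis = np.column_stack([np.cos(2 * np.pi * f_sig * t_all),
                         np.sin(2 * np.pi * f_sig * t_all)])
coef, *_ = np.linalg.lstsq(basis, v_all, rcond=None)
print(np.allclose(coef, [1.0, 0.0], atol=1e-6))
```

The essential ingredient is the same one the paper identifies: the signal trajectory irregularly intersects a multidimensional sampling grid, so the per-channel sample interval does not bound the recoverable bandwidth.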
With this method, an optimum balance of image bandwidth and signal-to-noise ratio may be achieved through simple processing parameter choices. The methodology is applicable whenever the signal trajectory is irregularly intersected by a sampling grid of two or more dimensions.

Aliasing

It is generally accepted that the digital sampling interval and the upper limit of the recoverable signal spectrum are inextricably linked. Authors usually cite the one-dimensional Whittaker-Shannon sampling theorem, which states that the maximum recoverable frequency (f_Nyq) in an evenly sampled function is given by

f_Nyq = 1 / (2 Δt),

where Δt is the sampling interval (in seconds). Frequencies in the input function that exceed this Nyquist frequency are, if not removed prior to sampling, said to be aliased (or folded) in the sampled output. Aliasing is generally understood to mean that those frequencies above f_Nyq are irretrievably lost by being "mixed" with those below f_Nyq. Commonly, aliasing has been dealt with by one or both of two methods:

Alias reduction method #1

Antialias filtering is almost always used to significantly reduce contamination by aliased frequencies whenever analog data are sampled (e.g., during field recording) or whenever the sampling interval is increased. While usually successful in preventing anticipated aliasing problems, this filtering can result in a loss of valuable signal both above and below f_Nyq. Also, additional equipment and/or processing costs are usually incurred.
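For concreteness, a two-line helper evaluating the formula above at the sample intervals common in seismic recording (4 ms, as in the synthetic test earlier in this paper, and 2 ms):

```python
def nyquist_hz(dt_seconds):
    """Nyquist frequency f_Nyq = 1 / (2 * dt) for sample interval dt."""
    return 1.0 / (2.0 * dt_seconds)

print(nyquist_hz(0.004))  # 4 ms sampling -> 125 Hz
print(nyquist_hz(0.002))  # 2 ms sampling -> 250 Hz
```

The 10–225 Hz recovery reported above from 4 ms data is thus well beyond the 125 Hz limit this formula assigns to a single evenly sampled channel.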