The duality between the maximum entropy method (MEM) of spectral analysis and the autoregressive (AR) representation of the data allows recent advances in AR analysis to be applied to MEM in an attempt to obviate some of the shortcomings of this method of spectral decomposition. Specifically, this paper investigates the work of Akaike (1969a, b) on a criterion for choosing the length of the required prediction error filter and compares two methods of determining the filter coefficients. Recent work by Kromer (1970) on asymptotic properties of the AR spectral estimator is also of importance. Some preliminary results on the splitting of the normal modes of the Earth are presented as an illustration of the application of MEM to geophysics.
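Because the MEM-AR duality is the hinge of this abstract, a minimal Python sketch may help fix ideas: Burg's recursion estimates the prediction error filter directly from the data, a criterion in the spirit of Akaike's final prediction error (FPE) selects the filter length, and the MEM spectrum follows from the filter. The function names and the exact FPE normalization are illustrative assumptions, not code from the paper.

```python
import numpy as np

def burg_pef(x, order):
    """Burg's recursion: prediction error filter coefficients a_1..a_p
    and the prediction error power at each order (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    f, b = x.copy(), x.copy()        # forward/backward prediction errors
    a = np.zeros(order)
    E = np.dot(x, x) / N             # order-0 error power
    errs = [E]
    for m in range(order):
        ff, bb = f[m + 1:], b[m:-1]
        k = -2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
        a_prev = a[:m].copy()
        a[:m] = a_prev + k * a_prev[::-1]   # Levinson-style coefficient update
        a[m] = k
        f[m + 1:], b[m + 1:] = ff + k * bb, bb + k * ff
        E *= 1.0 - k * k
        errs.append(E)
    return a, E, errs

def fpe(errs, N):
    """Akaike-style final prediction error for each candidate filter length;
    choose the order that minimizes it (normalization is an assumption)."""
    return [e * (N + m + 1) / (N - m - 1) for m, e in enumerate(errs)]

def mem_spectrum(a, E, dt=1.0, nf=512):
    """MEM spectrum P(f) = E*dt / |1 + sum_k a_k exp(-2*pi*i*f*k*dt)|^2."""
    freqs = np.linspace(0.0, 0.5 / dt, nf)
    k = np.arange(1, len(a) + 1)
    A = 1.0 + np.exp(-2j * np.pi * dt * np.outer(freqs, k)) @ a
    return freqs, E * dt / np.abs(A) ** 2
```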
We present an iterative nonparametric approach to spectral estimation that is particularly suitable for the estimation of line spectra. This approach minimizes a cost function derived from Bayes' theorem; it is well suited to line spectra because a "long-tailed" distribution is used to model the prior distribution of spectral amplitudes. An important aspect of this method is that, since the data themselves are used as constraints, phase information can also be recovered and used to extend the data outside the original window. The objective function is formulated in terms of hyperparameters that control the degree of fit and the spectral resolution. Noise rejection can also be achieved by truncating the number of iterations. Spectral resolution and extrapolation length are controlled by a single parameter. When this parameter is large compared with the spectral powers, the algorithm leads to zero extrapolation of the data, and the estimated Fourier transform yields the periodogram. When the data are sampled at a constant rate, the algorithm uses one Levinson recursion per iteration. For irregular sampling (unevenly sampled and/or gapped data), the algorithm uses one Cholesky decomposition per iteration. The performance of the algorithm is illustrated with three problems that frequently arise in geophysical data processing: 1) harmonic retrieval from a time series contaminated with noise; 2) linear event detection from a finite-aperture array of receivers, which is in fact an extension of 1); and 3) interpolation/extrapolation of gapped data. The performance of the algorithm as a spectral estimator is tested with the Kay and Marple data set. It is shown that the achieved resolution is comparable with that of parametric methods but with a more accurate representation of the relative power in the spectral lines.
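The iteration described above can be read as iteratively reweighted least squares: the long-tailed prior contributes a diagonal reweighting that shrinks small spectral coefficients while leaving the large (line) components nearly untouched. The Python sketch below covers the irregular-sampling case, with one Cholesky factorization per iteration; the Cauchy-type weighting and the hyperparameters lam and sigma are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def line_spectrum(d, t, freqs, lam=1.0, sigma=1.0, niter=10):
    """IRLS sketch of a Bayesian line-spectrum estimator with a long-tailed
    (Cauchy-type) prior; one Cholesky factorization per iteration."""
    F = np.exp(2j * np.pi * np.outer(t, freqs))  # frequency -> data operator
    FhF = F.conj().T @ F
    Fhd = F.conj().T @ d
    m = np.zeros(len(freqs), dtype=complex)
    for _ in range(niter):
        # Long-tailed prior => weights near 0 for strong components and
        # near 1 for weak ones, which promotes sparse line spectra.
        q = 1.0 / (1.0 + np.abs(m) ** 2 / sigma ** 2)
        A = FhF + lam * np.diag(q)               # Hermitian positive definite
        L = np.linalg.cholesky(A)                # one Cholesky per iteration
        m = np.linalg.solve(L.conj().T, np.linalg.solve(L, Fhd))
    return m   # complex coefficients: amplitude and phase are both recovered
```

With m initialized to zero the weights are uniform, so the first iterate is a damped least-squares solution; when lam is large compared with the spectral powers the iteration stays there, consistent with the periodogram limit noted in the abstract.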
The Radon transform (RT) suffers from the typical problems of loss of resolution and aliasing that arise as a consequence of incomplete information, including limited aperture and discretization. Sparseness in the Radon domain is a valid and useful criterion for supplying this missing information; it is roughly equivalent to assuming smooth amplitude variation in the transition between known and unknown (missing) data. Applying this constraint while honoring the data can become a serious challenge for routine seismic processing because of the very limited processing time generally available per common midpoint. To develop methods that are robust, easy to use, and flexible enough to adapt to different problems, we have to pay attention to a variety of algorithms, to operator design, and to the estimation of the hyperparameters that are responsible for the regularization of the solution. In this paper, we discuss fast implementations of several varieties of the RT in the time and frequency domains. An iterative conjugate gradient algorithm with fast Fourier transform multiplication is used in all cases. To preserve the important property of iterative subspace methods of regularizing the solution by the number of iterations, the model weights are incorporated into the operators. This turns out to be of particular importance, and it can be understood in terms of the singular vectors of the weighted transform. The iterative algorithm is stopped according to a generalized cross-validation criterion for subspaces. We apply this idea to several known implementations and compare results in order to better understand the differences between, and merits of, these algorithms.
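The key design point above, folding the model weights into the operator so that the number of conjugate gradient iterations itself regularizes the solution, can be sketched generically. In the Python fragment below, op and op_adj stand for the forward and adjoint Radon operators (applied with FFTs in practice); the right-preconditioned formulation (L W)u = d with m = Wu, the function name, and the fixed iteration count (in place of the cross-validation stopping rule mentioned above) are illustrative assumptions.

```python
import numpy as np

def cgls_weighted(op, op_adj, d, w, niter):
    """CGLS on the right-preconditioned system (L W) u = d, with m = W u.
    The model weights w reshape the singular vectors seen by the iteration,
    so stopping early regularizes the weighted solution."""
    u = np.zeros_like(w)
    r = d.copy()                     # residual d - (L W) u
    s = w * op_adj(r)                # gradient (L W)^H r
    p = s.copy()
    gamma = np.vdot(s, s).real
    for _ in range(niter):           # in practice: stop by cross validation
        q = op(w * p)
        alpha = gamma / np.vdot(q, q).real
        u = u + alpha * p
        r = r - alpha * q
        s = w * op_adj(r)
        gamma_new = np.vdot(s, s).real
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return w * u                     # return the physical model m = W u
```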
We present a high-resolution procedure to reconstruct common-midpoint (CMP) gathers. First, we describe the forward and inverse transformations between offset and velocity space. Then, we formulate an underdetermined linear inverse problem in which the target is the artifact-free, aperture-compensated velocity gather. We show that a sparse inversion leads to a solution that resembles the infinite-aperture velocity gather, that is, the velocity gather that would have been estimated with a simple conjugate operator designed from an infinite-aperture seismic array. This high-resolution velocity gather is then used to reconstruct the offset space. The algorithm is formally derived using two basic principles. First, we use the principle of maximum entropy to translate prior information about the unknown parameters into a probabilistic framework, in other words, to assign a probability density function to our model. Second, we apply Bayes' rule to combine the a priori probability density function (pdf) with the pdf corresponding to the experimental uncertainties (the likelihood function) and so construct the a posteriori distribution of the unknown parameters. Finally, the model is estimated by maximizing the a posteriori distribution. When the problem is correctly regularized, the algorithm converges to a solution characterized by different degrees of sparseness depending on the required resolution. The solutions exhibit minimum entropy when the entropy is measured in terms of Burg's definition. We emphasize two crucial differences between our approach and the familiar Burg method of maximum entropy spectral analysis. First, Burg's entropy is minimized rather than maximized, which is equivalent to inferring as much as possible about the model from the data. Second, our approach uses the data as constraints, in contrast with the classic maximum entropy spectral analysis approach, where the autocorrelation function is the constraint. This implies that we recover not only amplitude information but also phase information, which serves to extrapolate the data outside the original aperture of the array. The tradeoff between data fit and resolution is controlled by a single parameter that, under asymptotic conditions, reduces the method to a damped least-squares solution. Finally, the high-resolution, aperture-compensated velocity gather is used to extrapolate near- and far-offset traces.
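To make the Bayesian construction concrete, the maximum a posteriori estimate can be written as the minimizer of a misfit-plus-prior objective. The specific long-tailed prior term below is an illustrative assumption consistent with the sparse behavior described above, not necessarily the exact form used in the paper:

```latex
% MAP objective: Gaussian likelihood + long-tailed prior (illustrative form)
J(\mathbf{m}) \;=\;
\frac{1}{2\sigma_n^{2}}\,\bigl\lVert \mathbf{d}-\mathbf{L}\mathbf{m}\bigr\rVert_2^{2}
\;+\;
\sum_{i}\ln\!\Bigl(1+\frac{m_i^{2}}{\sigma_c^{2}}\Bigr)
```

Here $\mathbf{L}$ maps the velocity gather $\mathbf{m}$ to the CMP data $\mathbf{d}$, and the ratio of the scale parameters plays the role of the single trade-off parameter: when $\sigma_c$ is large compared with the model powers, $\ln(1+m_i^{2}/\sigma_c^{2})\approx m_i^{2}/\sigma_c^{2}$, and the objective reduces to damped least squares, matching the asymptotic limit noted in the abstract.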
In this paper we show that, given prior information in terms of a lower and an upper bound, a prior bias, and constraints in terms of measured data, minimum relative entropy (MRE) yields exact expressions for the posterior probability density function (pdf) and the expected value of the linear inverse problem. In addition, we are able to produce any desired confidence intervals. In numerical simulations, we use the MRE approach to recover the release and evolution histories of a plume in a one-dimensional system with constant, known velocity and dispersivity. For noise-free data, we find that the reconstructed plume evolution history is indistinguishable from the true history, and an exact match to the observed data is evident. Two methods are chosen for separating the signal from a noisy data set. The first uses a modification of MRE for uncertain data. The second uses "presmoothing" by fast Fourier transforms and Butterworth filters to remove noise from the signal before the "noise-free" variant of MRE inversion is applied. Both methods appear to work very well in recovering the true signal, and their results qualitatively appear superior to those of Skaggs and Kabala [1994]. We also solve a degenerate case with a very high standard deviation in the noise; the recovered model indicates that the MRE inverse method still manages to recover the salient features of the source history. Once the plume source history has been recovered, the future behavior of the plume can be cast in a probabilistic framework. In an example simulation, the MRE approach not only resolved the source function from noisy data but also correctly predicted future behavior.
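As a sketch of the underlying variational problem (standard MRE theory; the notation below is ours, not necessarily the paper's): the posterior pdf $q(\mathbf{m})$ is the density closest to the prior $p(\mathbf{m})$, in the relative-entropy sense, among all densities consistent with the data:

```latex
% Minimum relative entropy (illustrative formulation)
\min_{q}\;\int q(\mathbf{m})\,\ln\frac{q(\mathbf{m})}{p(\mathbf{m})}\,d\mathbf{m}
\quad\text{subject to}\quad
\int q(\mathbf{m})\,\mathbf{G}\mathbf{m}\,d\mathbf{m}=\mathbf{d},
\qquad
\int q(\mathbf{m})\,d\mathbf{m}=1
```

The constrained minimizer has the exponential form $q(\mathbf{m})\propto p(\mathbf{m})\exp(-\boldsymbol{\lambda}^{\mathsf T}\mathbf{G}\mathbf{m})$, with Lagrange multipliers $\boldsymbol{\lambda}$ fixed by the data; because the bounded prior makes the normalization explicit, the posterior mean and the confidence intervals mentioned above follow in closed form.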