Robust estimation of power spectra, coherences, and transfer functions is investigated in the context of geophysical data processing. The methods described are frequency-domain extensions of current techniques from the statistical literature and are applicable in cases where section-averaging methods would be used with data that are contaminated by local nonstationarity or isolated outliers. The paper begins with a review of robust estimation theory, emphasizing statistical principles and the maximum likelihood or M-estimators. These are combined with section-averaging spectral techniques to obtain robust estimates of power spectra, coherences, and transfer functions in an automatic, data-adaptive fashion. Because robust methods implicitly identify abnormal data, methods for monitoring the statistical behavior of the estimation process using quantile-quantile plots are also discussed. The results are illustrated using a variety of examples from electromagnetic geophysics.

INTRODUCTION

Reliable estimation of power spectra for single data sequences, or of transfer functions and coherences between multiple time series, is of central importance in many areas of geophysics and engineering. While the effects of the underlying Gaussian distributional assumptions on such estimates are generally understood, the ability of a small fraction of non-Gaussian noise or localized nonstationarity to affect them is not. These phenomena can destroy conventional estimates, often in a manner that is difficult to detect.

Problems with conventional (i.e., nonrobust) time series procedures arise because they are essentially copies of classical statistical procedures parameterized by frequency. Once Fourier transforms are taken, estimating a spectrum is the same process as computing a variance, and estimating a transfer function is a similar procedure to linear regression.
Because these methods are based on the least squares or Gaussian maximum likelihood approaches to statistical inference, their advantages include simplicity and the optimality properties established by the Gauss-Markov theorem. If the residuals are drawn from a multivariate normal probability distribution, then the least squares result is also a maximum likelihood, fully efficient, minimum variance estimate. In practice, the regression model is rarely an accurate description due to departures of the data from the model requirements. Most data contain a small fraction of unusual observations or "outliers" that do not fit the model distribution or share the characteristics of the bulk of the sample. These can often be described by a probability distribution which has a nearly Gaussian shape in the center and tails which are heavier than would be expected for a normal one, or by mixtures of Gaussian distributions with different variances.

Two forms of data outliers are common: point defects and local nonstationarity. Point defects are isolated outliers that exist independent of the structure of the process under study. In this paper the principles o...
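The combination of section averaging with M-estimation described above can be sketched in code. The following is a hypothetical illustration, not the authors' algorithm: it computes one windowed periodogram per section and then, at each frequency, replaces the plain section average with a Huber-weighted average that downweights outlying sections. The function and parameter names are invented for this sketch.

```python
import numpy as np

def robust_section_spectrum(x, nsec=16, n_iter=10, k=1.5):
    """Robust section-averaged power spectrum estimate (illustrative).

    Per-frequency Huber-type M-estimation across section periodograms:
    sections whose power at a given frequency is an outlier relative to
    the robust center are downweighted rather than averaged in fully.
    """
    segs = np.array_split(np.asarray(x, float), nsec)
    m = min(len(s) for s in segs)
    win = np.hanning(m)
    # One windowed periodogram per section (rows: sections, cols: freqs).
    P = np.array([np.abs(np.fft.rfft(s[:m] * win)) ** 2 for s in segs])
    S = np.median(P, axis=0)  # robust starting value per frequency
    for _ in range(n_iter):
        r = P - S
        # MAD-based scale per frequency; 1.4826 makes the MAD consistent
        # with the standard deviation for Gaussian residuals.
        scale = 1.4826 * np.median(np.abs(r), axis=0) + 1e-30
        u = np.abs(r) / scale
        # Huber weights: 1 inside the threshold k, k/|u| outside it.
        w = k / np.maximum(u, k)
        S = np.sum(w * P, axis=0) / np.sum(w, axis=0)
    return S
```

A section containing an isolated outlier burst inflates its periodogram at all frequencies; the iteration above assigns it a small weight instead of letting it dominate the average, which is the data-adaptive behavior the abstract describes.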
We have compiled both laboratory and worldwide field data on electrical conductivity to help understand the physical implications of deep crustal electrical profiles. Regional heat flow was used to assign temperatures to each layer in regional electrical conductivity models; we avoided those data where purely conductive heat flow suggested temperatures more than about 1000°C, substantially higher than solidus temperatures and outside the range of validity of heat flow models. The resulting plots of log conductivity σ versus 1/T demonstrate that even low‐conductivity layers (LCL) have conductivities several orders of magnitude higher than dry laboratory samples and that the data can be represented by straight-line fits. In addition, tectonically active regions show systematically higher conductivities than do shield areas. Because volatiles are usually lost in laboratory measurements and their absence is a principal difference between laboratory and field conditions, these materials probably account for the relatively higher conductivities of rocks in situ in the crust; free water in amounts of 0.01–0.1% in fracture porosity could explain crustal conductivities. Other possibilities are graphite, hydrated minerals in rare instances, and sulfur in combination with other volatiles. As most of the temperatures are less than 700°C, partial melting seems likely only in regions of highest heat flow where the conductive temperature profiles are inappropriate. Another result is that at a given temperature, crustal high‐conductivity layers (HCL) are more conductive by another order of magnitude and show more scatter than do LCL's. Because the differences between HCL's and LCL's are independent of temperature, we must invoke more than temperature increases as a cause for large conductivity increases; increased fluid concentration in situ seems a probable cause for enhanced conductivities in HCL's.
From the point of view of these observations, it does not matter whether the fluids are in communication with the surface or trapped at lithostatic pressures.
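The straight-line fits of log σ versus 1/T described above correspond to an Arrhenius law, σ = σ₀ exp(−E/kT), whose slope gives an activation energy. A minimal sketch of such a fit, using synthetic illustrative values rather than any of the compiled field data (the function name and numbers are invented for this example):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def fit_arrhenius(T_kelvin, sigma):
    """Least-squares line fit of log10(conductivity) versus 1/T.

    For sigma = sigma0 * exp(-E / (k_B * T)), log10(sigma) is linear
    in 1/T; returns the recovered (activation_energy_eV, sigma0).
    """
    x = 1.0 / np.asarray(T_kelvin, float)
    y = np.log10(np.asarray(sigma, float))
    slope, intercept = np.polyfit(x, y, 1)
    E = -slope * K_B * np.log(10.0)  # convert log10 slope to energy in eV
    sigma0 = 10.0 ** intercept
    return E, sigma0

# Synthetic example (illustrative values only, not crustal data):
T = np.array([500.0, 600.0, 700.0, 800.0, 900.0])  # K
E_true, s0_true = 0.7, 100.0                        # eV, S/m
sigma = s0_true * np.exp(-E_true / (K_B * T))
E_fit, s0_fit = fit_arrhenius(T, sigma)
```

On a log σ versus 1/T plot of this kind, a vertical offset between two data sets at the same temperature (as between HCL's and LCL's above) changes σ₀ but not the slope, which is why a temperature increase alone cannot explain it.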
The gravity method was the first geophysical technique to be used in oil and gas exploration. Despite being eclipsed by seismology, it has continued to be an important and sometimes crucial constraint in a number of exploration areas. In oil exploration the gravity method is particularly applicable in salt provinces, overthrust and foothills belts, underexplored basins, and targets of interest that underlie high-velocity zones. The gravity method is used frequently in mining applications to map subsurface geology and to directly calculate ore reserves for some massive sulfide orebodies. There is also a modest increase in the use of gravity techniques in specialized investigations for shallow targets. Gravimeters have undergone continuous improvement during the past 25 years, particularly in their ability to function in a dynamic environment. This and the advent of global positioning systems (GPS) have led to a marked improvement in the quality of marine gravity and have transformed airborne gravity from a regional technique to a prospect-level exploration tool that is particularly applicable in remote areas or transition zones that are otherwise inaccessible. Recently, moving-platform gravity gradiometers have become available and promise to play an important role in future exploration. Data reduction, filtering, and visualization, together with low-cost, powerful personal computers and color graphics, have transformed the interpretation of gravity data. The state of the art is illustrated with three case histories: 3D modeling of gravity data to map aquifers in the Albuquerque Basin, the use of marine gravity gradiometry combined with 3D seismic data to map salt keels in the Gulf of Mexico, and the use of airborne gravity gradiometry in exploration for kimberlites in Canada.
An Airy-type geophysical experiment was conducted in a 2-km-deep hole in the Greenland ice cap at depths between 213 and 1673 m to test for possible violations of Newton's inverse-square law. An anomalous gravity gradient was observed. We cannot unambiguously attribute it to a breakdown of Newtonian gravity because we have shown that it might be due to unexpected geological features in the rock below the ice.

PACS numbers: 04.80.+z, 04.90.+e, 93.30.Kh

Some unified field theories [1] raise the possibility that forces exist in nature with ranges on the order of 10^2–10^5 m and coupling strengths close to that of gravity. If they exist, these new forces would be apparent as violations of Newton's inverse-square law. Recent geophysical measurements in a mine [2] and on a tall television antenna [3] have reported small deviations from the classical law. This paper describes a geophysical experiment to search for possible finite-scale, non-Newtonian gravity over a vertical distance of 213-1673 m in the glacial ice of the Greenland ice cap. The principal reason for the choice of experimental site is that the uniformity of the ice eliminates one of the major sources of uncertainty arising in the first of the earlier studies [2], namely, the heterogeneity of the rocks through which the mine shaft passes. Our observations were made at Dye 3, Greenland, in a 2033-m-deep borehole, which reached the basement rock. The site is 60 km south of the Arctic Circle, 125 km inland from Greenland's east coast, and at a 2530-m elevation.

The Newtonian prediction of the gravity profile in the borehole, based on a density model of the ice and the topographic relief of the bedrock developed from geophysical measurements, was compared with measured values. Differences in gravity g were measured at several depths z and modeled by

Δg_m = (γ − 4πGρ_i)Δz + Δg_T,  (1)

where γ is the theoretical free-air gravity gradient, G is the Newtonian gravitational constant as determined in laboratory experiments, ρ_i is the ice density, and Δg_T is a correction to the gravity differences based on the attraction of the subice terrain. (The effect of the ice-surface topography is negligible.) Although Eq. (1) is adequate within the uncertainties of our experiment, a more exact expression [4] which accounts for ρ_i = ρ_i(z), γ = γ(z), and the Earth's ellipticity was used in the calculations. The gravity anomaly Δg is defined as the difference between the observed gravity in the borehole g_obs and the modeled gravity g_m:

Δg = g_obs(z) − g_m(z).  (2)

Now we describe the steps taken to obtain the experimental observations and model calculations given in Table I. The uncertainties in this table include contributions from the measurements themselves and from imperfect knowledge of the ice density and the terrain, with the latter effect dominating. They do not reflect our ignorance of the density inhomogeneities in the underlying rock. This issue, which in the end has the least controlled systematic uncertainty, will be discussed below. Before the measurements were made in Greenland, the borehole gravity meter was...
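The Newtonian prediction down a borehole in a uniform slab takes the form Δg = (γ − 4πGρ)Δz plus a terrain correction: the free-air increase with depth minus the double Bouguer term from the ice passed through. A minimal numerical sketch using nominal textbook constants, not the experiment's actual density model or terrain corrections:

```python
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
GAMMA_FREE_AIR = 3.086e-6  # nominal free-air gradient, (m/s^2) per m
RHO_ICE = 920.0            # nominal glacial ice density, kg/m^3

def newtonian_gravity_difference(dz, rho=RHO_ICE, gamma=GAMMA_FREE_AIR):
    """Predicted gravity increase over a depth interval dz inside a
    uniform slab: free-air gradient minus the 4*pi*G*rho Bouguer term.
    Terrain and ellipticity corrections are omitted in this sketch."""
    return (gamma - 4.0 * math.pi * G * rho) * dz

# Over the 213-1673 m interval spanned by the measurements:
dg = newtonian_gravity_difference(1673.0 - 213.0)  # m/s^2
dg_mgal = dg * 1e5  # 1 mGal = 1e-5 m/s^2
```

Because the 4πGρ term is multiplied by G, a measured gradient anomaly in such a profile can be read either as a depth-dependent effective G (non-Newtonian gravity) or as unmodeled density structure below, which is exactly the ambiguity the authors report.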
It is well known that the interpretation of gravity anomaly data suffers from a fundamental nonuniqueness. No matter how complete a gravity data set may be, there are an unlimited number of subsurface density solutions compatible with it. This generally remains true even when the class of mathematically acceptable solutions is limited by imposed constraints based on physical or geologic arguments. The common practice of constructing a single solution that fits, or only approximately fits, the anomaly data is therefore of limited value.
In June 1987 a gravimeter calibration range was set up in southeastern Alaska and the Yukon Territory, as part of a geophysical determination of the Newtonian gravitational constant. Absolute gravity measurements were made between the range endpoints using the Institute of Geophysics and Planetary Physics absolute gravity meter. The calibration range spans 171.841 ± 0.014 mGal, with a midpoint g value of 9.81746500 m s−2. Relative gravity meters, including a LaCoste and Romberg borehole gravity meter, were read along this range. A scale factor correction (SFC) for borehole meter 14 was found to be (8.1 ± 1.5) × 10−4, and for meter G‐349 the correction was (−3.3 ± 1.7) × 10−4. The SFC for meter D‐85 has an upper bound of ±1.0 × 10−4.