2011
DOI: 10.1190/1.3548548

Processing gravity gradient data

Abstract: As the demand for high-resolution gravity gradient data increases and surveys are undertaken over larger areas, new challenges for data processing have emerged. In the case of full-tensor gradiometry, the processor is faced with multiple derivative measurements of the gravity field with useful signal content down to a few hundred meters' wavelength. Ideally, all measurement data should be processed together in a joint scheme to exploit the fact that all components derive from a common source. We have investiga…

Cited by 79 publications (35 citation statements)
References 23 publications
“…vertical component (gz) only. In addition, it has interpolative power, which can be exploited to adapt the survey design as the exploration program progresses and also to increase the effective resolution of the data [15,16]. Furthermore, the modeling of gravity data involving their gradients for quantitative interpretation will result in sharper lateral boundaries.…”
Section: Discussion
confidence: 99%
“…We conclude that, at least for gravity and FTG modelling, we cannot achieve a good approximation of the subsurface by simply downsampling an existing and more precise model. The reason is that the result of the downsampling is not an equivalent source of the original model, and this fact, itself, suggests the solution to the issue, i.e., apply the equivalent source method (Dampney ; Barnes and Lumley ). From here on, we would like to clarify that we are using the equivalent source technique in a broader sense with respect to what was proposed in Dampney () or Barnes and Lumley ().…”
Section: Approximation of the Topography
confidence: 99%
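The equivalent-source method referred to in the excerpt above can be sketched concretely: place a layer of point masses below the observation surface, solve a linear least-squares problem so that their summed attraction reproduces the measured vertical gravity, and then use the fitted masses to predict the field at unmeasured locations. The sketch below is illustrative only — the geometry, depths, solver, and all names are assumptions, not the implementation of Dampney or of Barnes and Lumley.

```python
# Illustrative equivalent-source sketch (assumed setup, not the papers' code):
# fit point masses on a layer below the survey so their attraction reproduces
# observed g_z, then predict g_z at a new location. z is positive downward.
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_kernel(obs, src):
    """A[i, j] = vertical attraction at obs[i] per unit mass at src[j]."""
    d = obs[:, None, :] - src[None, :, :]      # (n_obs, n_src, 3) separations
    r = np.sqrt((d ** 2).sum(axis=2))
    return G * (-d[:, :, 2]) / r ** 3          # -d_z = src_z - obs_z (z down)

def fit_equivalent_sources(obs, gz_obs, src):
    """Least-squares point masses that reproduce the observed g_z."""
    A = gz_kernel(obs, src)
    masses, *_ = np.linalg.lstsq(A, gz_obs, rcond=None)
    return masses

# Synthetic demo: one buried point mass, observations on a regular grid.
x = np.linspace(-200.0, 200.0, 11)
xx, yy = np.meshgrid(x, x)
obs = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
true_src = np.array([[0.0, 0.0, 100.0]])       # true mass 100 m deep
gz_obs = gz_kernel(obs, true_src) @ np.array([1.0e9])

src = obs + np.array([0.0, 0.0, 50.0])         # equivalent layer at 50 m depth
m = fit_equivalent_sources(obs, gz_obs, src)

# Interpolate g_z at a point that was never observed and compare to truth.
new_obs = np.array([[20.0, 20.0, 0.0]])
gz_pred = gz_kernel(new_obs, src) @ m
gz_true = gz_kernel(new_obs, true_src) @ np.array([1.0e9])
```

Because the fitted layer is itself a valid gravitational source, predictions above it honor the physics of the field rather than a purely geometric interpolant — which is the "interpolative power" the citing excerpts describe.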
“…Of these two sources of variable noise levels, the latter is more important because most pre-processing algorithms combine the original component measurements into a single quantity from which the final (processed) component values are then computed. Since the tensor components provide independent but related measurements of the gravity field, it is appropriate that they are combined by calculating the underlying field (or potential) or equivalent density distribution that models the field (Li, 2001; Barnes and Lumley, 2011). As a result, the two main pre-processing methods currently in use are the Fourier transform approach (to compute the field or potential) and the equivalent source method (to compute a density distribution).…”
Section: Parameter Errors
confidence: 99%
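The "common source" property that these joint processing schemes exploit can be made concrete: every tensor component is a second derivative of one harmonic potential, so the tensor must be symmetric and traceless (Txx + Tyy + Tzz = 0 away from sources, by Laplace's equation). A minimal sketch for a point mass, with illustrative names and values:

```python
# Sketch: full gravity gradient tensor of a point mass. Because all nine
# components derive from one harmonic potential U = G m / r, the tensor is
# symmetric and its trace vanishes — the constraints joint schemes exploit.
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gradient_tensor(obs, src, mass):
    """T[i, j] = d^2 U / (dx_i dx_j) at obs, for a point mass at src."""
    d = np.asarray(obs, float) - np.asarray(src, float)
    r = np.linalg.norm(d)
    return G * mass * (3.0 * np.outer(d, d) / r ** 5 - np.eye(3) / r ** 3)

# Evaluate the tensor at an arbitrary observation point (illustrative numbers).
T = gradient_tensor([100.0, 50.0, 0.0], [0.0, 0.0, 200.0], 1.0e9)
```

In a joint processing scheme these identities act as consistency checks and constraints: measured components that violate symmetry or the zero-trace condition beyond the noise level indicate instrument error rather than geology.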