We present a finite-element algorithm for computing MT responses of 3D conductivity structures. The governing differential equations are derived from the T–Ω Helmholtz decomposition of the magnetic field H in Maxwell's equations, in which T is the electric vector potential and Ω is the magnetic scalar potential. The Coulomb gauge condition on T, necessary to obtain a unique solution for T, is incorporated into the magnetic flux-density conservation equation. This decomposition has two important benefits. First, the only unknown in the air is the scalar potential Ω. Second, the curl–curl equation describing T is defined only in the earth. By comparison, the curl–curl equations for H and for the electric field E are singular in the air, where the conductivity σ is zero. Whereas the E or H formulation usually requires a small but nonzero value of σ in the air and the application of a divergence correction, the T–Ω method avoids both. In the finite-element approximation, T and Ω are represented by edge-element and nodal-element interpolation functions, respectively, within each brick element. The validity of this modeling approach is investigated and confirmed by comparing its results with those of other numerical techniques for two 3D models.
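In outline, and with one common sign and time-factor convention (the paper's own convention may differ), the T–Ω decomposition described above can be written as:

```latex
% Helmholtz decomposition of the magnetic field (time factor e^{i\omega t} assumed):
\mathbf{H} = \mathbf{T} + \nabla\Omega, \qquad
\nabla\cdot\mathbf{T} = 0 \quad \text{(Coulomb gauge)}

% Curl--curl equation for T, defined only in the conductive earth:
\nabla\times\left(\sigma^{-1}\,\nabla\times\mathbf{T}\right)
  + i\omega\mu\left(\mathbf{T} + \nabla\Omega\right) = \mathbf{0}

% Magnetic flux-density conservation, valid everywhere (earth and air):
\nabla\cdot\left[\mu\left(\mathbf{T} + \nabla\Omega\right)\right] = 0
```

In the air, T vanishes and the last equation reduces to a scalar Laplace-type equation for Ω alone, which is why the scalar potential is the only unknown there.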
I present a method for calculating frequency-domain electromagnetic responses caused by a dipole source over a 2-D structure. In modeling controlled-source electromagnetic data, it is usual to separate the electromagnetic field into a primary (background) field and a secondary (scattered) field to avoid the source singularity, so that only the secondary field caused by anomalous bodies is computed numerically. However, this conventional scheme is not effective for complex structures that lack a simple background model. The present modeling method instead uses a pseudo-delta function to distribute the dipole source current, and does not require the separation into primary and secondary fields. In addition, the method employs an isoparametric finite-element technique to represent realistic topography. Numerical experiments are used to validate the code. Finally, simulations of a source-overprint effect and of the topographic response for long-offset transient electromagnetic and controlled-source magnetotelluric measurements are presented.
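The idea of distributing a point source with a pseudo-delta function can be sketched as follows. The Gaussian shape, grid spacing, and smoothing width below are illustrative assumptions, not the paper's actual choices; the key property is that the distributed weights integrate to the total injected current.

```python
import math

def pseudo_delta(x, a):
    """Smooth approximation to the Dirac delta with width a.

    This normalized Gaussian is an illustrative stand-in; the shape of the
    paper's pseudo-delta function may differ.
    """
    return math.exp(-(x / a) ** 2) / (a * math.sqrt(math.pi))

# Distribute a unit dipole current onto a 1-D grid (hypothetical spacing).
dx = 1.0          # node spacing in metres (assumed)
a = 3.0 * dx      # smoothing width: a few cells wide
nodes = [i * dx for i in range(-50, 51)]
weights = [pseudo_delta(x, a) * dx for x in nodes]

total = sum(weights)   # should be very close to 1 (total injected current)
print(f"total distributed current: {total:.6f}")
```

Because the source current is spread smoothly over several elements, no single node carries a singular value, so the primary/secondary field separation becomes unnecessary.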
SUMMARY In controlled‐source electromagnetic measurements in the near zone or at low frequencies, the real (in‐phase) frequency‐domain component is dominated by the primary field. However, it is the imaginary (quadrature) component that contains the signal related to a target deeper than the source–receiver separation. In practice, it is difficult to measure the imaginary component because of the dominance of the primary field. In contrast, data acquired in the time domain are more sensitive to the deeper target owing to the absence of the primary field. To estimate the frequency‐domain responses reliably from the time‐domain data, we have developed a Fourier transform algorithm using a least‐squares inversion with a smoothness constraint (smooth spectrum inversion). In implementing the smoothness constraint as a priori information, we estimate the frequency response by maximizing the a posteriori distribution based on Bayes' rule. The adjustment of the weighting between the data misfit and the smoothness constraint is accomplished by minimizing Akaike's Bayesian Information Criterion (ABIC). Tests of the algorithm on synthetic and field data for the long‐offset transient electromagnetic method provide reasonable results. The algorithm can handle time‐domain data with a wide range of delay times, and is effective for analysing noisy data.
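Schematically, and in generic notation not taken from the paper, the smooth spectrum inversion amounts to a regularized linear least-squares problem:

```latex
% d: time-domain data; G: operator mapping the discretized frequency
% response f to delay times; C: roughening (smoothness) operator.
\hat{\mathbf{f}}(\alpha) = \arg\min_{\mathbf{f}}
  \left\{ \left\| \mathbf{d} - G\mathbf{f} \right\|^{2}
        + \alpha^{2} \left\| C\mathbf{f} \right\|^{2} \right\}

% The hyperparameter \alpha trades off data misfit against smoothness and is
% chosen by minimizing ABIC, defined from the marginal (Bayesian) likelihood:
\mathrm{ABIC}(\alpha) = -2\log \int
  p(\mathbf{d}\mid\mathbf{f})\,\pi(\mathbf{f}\mid\alpha)\,\mathrm{d}\mathbf{f}
  + 2\,(\text{number of hyperparameters})
```

Maximizing the a posteriori distribution for a fixed α gives the first expression; minimizing ABIC over α implements the automatic adjustment of the weighting described above.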
Three different-scale electromagnetic (EM) measurements have been performed in the Kujukuri coastal plain, southeast Japan, to investigate the distribution of saline groundwater. The three techniques were audio-frequency magnetotelluric (AMT), transient electromagnetic (TEM), and small loop-loop EM measurements. The resistivity sections estimated from these data sets reveal three independent resistivity distributions extending to different depths. The AMT method reveals a regional-scale resistivity distribution across the plain to a maximum depth of approximately [Formula: see text] and the existence of deep conductive zones, which are inferred to be associated with fossil seawater trapped in a Pleistocene formation. The TEM results show a medium-scale resistivity distribution to depths of approximately [Formula: see text], in which two shallow conductive zones are recognized. It is concluded that these features are caused by present seawater intrusion and high-salinity salt-marsh deposits formed during sporadic marine regressions. The small loop-loop EM method provided a shallow resistivity profile that highlights the conductive salt-marsh deposits and resistive sandy ridges. Although these resistivity sections correspond to different depth ranges, the overlapping portions of the sections are very consistent with one another. These EM methods are useful in detecting and interpreting important resistivity features. Taking the geologic evolution of the coastal plains into consideration is crucial when interpreting resistivity profiles such as these, and our results suggest that the presence of fossil seawater is an important factor controlling resistivity at a variety of depths.
The Yurihara oil and gas field is located on the southern edge of Akita Prefecture, northeastern Japan. In this area, drilling, surface geological surveys and many seismic surveys have been used to investigate the geological structure. Wells drilled into the Nishikurosawa Basalt Group (NBG) of Miocene age found oil and gas reservoirs at depths of 1.5–2 km. Oil and gas are now being produced commercially and further exploration is required in the surrounding areas. However, since the neighbouring areas are covered with young volcanic products from the Chokai volcano, and have rough topography, the subsurface distribution of the NBG must be investigated using methods in addition to seismic reflection. According to the well data, the resistivity of the NBG is higher than that of the overlying sedimentary formations, and therefore the magnetotelluric (MT) method is expected to be useful for estimating the distribution of the NBG. An MT survey was conducted along three survey lines in this area. Each line trended east–west, perpendicular to the regional geological strike, and comprised about 25 measurement sites. Induction vectors evaluated from the magnetic field show that this area has a two-dimensional structure. The evaluated resistivity sections are in agreement with the log data. In conclusion, we were able to detect resistive layers (the NBG) below conductive layers. The results indicate that the NBG becomes gradually less resistive from north to south. In the centre of the northern line, an uplifted resistive area is interpreted as corresponding to the reservoir. By comparison with a seismic section, we demonstrate the effectiveness of integrating seismic and MT surveys for investigating the morphology and internal structure of the NBG. On the other survey lines, the resistive uplifted zones are interpreted as possible prospective areas.
Interpretation of controlled-source electromagnetic (CSEM) data is usually based on 1-D inversions, whereas data from direct current (dc) resistivity and magnetotelluric (MT) measurements are commonly interpreted by 2-D inversions. We have developed an algorithm to invert frequency-domain vertical magnetic-field data generated by a grounded-wire source for a 2-D model of the earth (a so-called 2.5-D inversion). To stabilize the inversion, we adopt a smoothness constraint on the model parameters and adjust the regularization parameter objectively using a statistical criterion. A test using synthetic data from a realistic model shows that a single source is insufficient to recover an acceptable result. In contrast, the joint use of data generated by a left-side source and a right-side source dramatically improves the inversion result. We applied our inversion algorithm to a field data set, which was transformed from long-offset transient electromagnetic (LOTEM) data acquired in a Japanese oil and gas field. As with the synthetic data set, the inversion of the joint data set converged automatically and produced a better model than the inversion of data from either source alone. In addition, our 2.5-D inversion accounted for the sign reversals in the LOTEM measurements, which is impossible with 1-D inversions. The shallow parts (above about 1 km depth) of the final model obtained by our 2.5-D inversion agree well with those of a 2-D inversion of MT data.
ABSTRACT Regularization is the most popular technique for overcoming the null space of model parameters in geophysical inverse problems, and is implemented by including a constraint term as well as the data-misfit term in the objective function being minimized. The weighting of the constraint term relative to the data-misfit term is controlled by a regularization parameter, and its adjustment to obtain the best model has received much attention. The empirical Bayes approach discussed in this paper determines the optimum value of the regularization parameter from a given data set. The regularization term can be regarded as representing a priori information about the model parameters. The empirical Bayes approach and its more practical variant, Akaike's Bayesian Information Criterion, adjust the regularization parameter automatically in response to the level of data noise and to the suitability of the assumed a priori model information for the given data. When the noise level is high, the regularization parameter is made large, which means that the a priori information is emphasized. If the assumed a priori information is not suitable for the given data, the regularization parameter is made small. Both these behaviours are desirable characteristics for the regularized solutions of practical inverse problems. Four simple examples are presented to illustrate these characteristics for an underdetermined problem, a problem adopting an improper prior constraint and a problem having an unknown data variance, all frequently encountered geophysical inverse problems. Numerical experiments using Akaike's Bayesian Information Criterion for synthetic data provide results consistent with these characteristics.
In addition, concerning the selection of an appropriate type of a priori model information, a comparison among four types of difference-operator model (the zeroth-, first-, second- and third-order difference operators) suggests that the automatic determination of the optimum regularization parameter becomes more difficult as the order of the difference operator increases. Accordingly, taking the effect of data noise into account, it is better to employ lower-order difference-operator models for inversions of noisy data.
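The automatic weighting described above can be illustrated with a small numerical sketch. The problem setup, operator sizes, and the particular linear-Gaussian ABIC expression below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, underdetermined linear inverse problem d = G m + noise.
N, M = 20, 30
x = np.linspace(0.0, 1.0, M)
m_true = np.sin(2.0 * np.pi * x)                 # smooth "true" model
G = rng.standard_normal((N, M)) / np.sqrt(M)     # arbitrary forward operator
d = G @ m_true + 0.05 * rng.standard_normal(N)

# First-order difference operator: the smoothness (a priori) constraint.
L = np.diff(np.eye(M), axis=0)                   # shape (M-1, M)

def solve(alpha2):
    """Regularized solution minimizing ||d - G m||^2 + alpha2 * ||L m||^2."""
    A = G.T @ G + alpha2 * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

def abic(alpha2):
    """One linear-Gaussian form of ABIC with constants dropped; the exact
    expression used in the paper may differ in detail."""
    m = solve(alpha2)
    U = np.sum((d - G @ m) ** 2) + alpha2 * np.sum((L @ m) ** 2)
    _, logdet_A = np.linalg.slogdet(G.T @ G + alpha2 * (L.T @ L))
    return N * np.log(U) - L.shape[0] * np.log(alpha2) + logdet_A

# Scan the regularization parameter and pick the ABIC minimizer.
alpha2_grid = np.logspace(-4, 2, 61)
abic_values = np.array([abic(a) for a in alpha2_grid])
alpha2_best = alpha2_grid[int(np.argmin(abic_values))]
m_best = solve(alpha2_best)
print(f"ABIC-optimal alpha^2: {alpha2_best:.3g}")
```

Raising the noise level in d typically moves the ABIC minimum toward larger alpha2 (stronger smoothing), matching the behaviour the abstract describes.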