We have developed a 2.5D finite-element modeling (FEM) method for marine controlled-source electromagnetic (CSEM) applications in stratified anisotropic media. The main feature of the method is that delta sources are used to solve the governing partial differential equations twice, for cases with and without a resistive target, and the difference of the two solutions is taken as the field scattered by the target. The total field is then the sum of the analytical background field, calculated with a 1D modeling method, and this scattered field. Compared with a conventional direct solution (using delta sources directly in a 2.5D formulation), the new method has smaller near-field error arising from the source singularity and smaller boundary reflections. The new method does not require a dense mesh in the source region, which reduces the total number of variables to be solved; the modeling time can thus be kept within a few minutes for some cases. We show that the maximum relative error of the calculation can be kept within 2% for targets at depths of approximately [Formula: see text]. The method is valid for stratified anisotropic media. The anisotropic modeling examples show that (1) marine CSEM is predominantly sensitive to target vertical resistivity and not to target horizontal resistivity, provided that the targets are thin, horizontal, high-resistivity layers, and (2) marine CSEM is sensitive to the horizontal resistivity of the conductive sediments surrounding the target (e.g., the overburden).
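The secondary-field decomposition described above can be sketched in a few lines. This is a toy illustration with synthetic placeholder values, not the authors' FEM implementation; in the actual method the two delta-source solutions come from 2.5D finite-element runs on the same mesh.

```python
import numpy as np

# Toy illustration of the scattered-field (secondary-field) decomposition:
# total field = analytic 1D background + (FEM with target - FEM without target).
# All field values below are synthetic placeholders.

rng = np.random.default_rng(0)
n = 8  # number of receiver positions (hypothetical)

# 1D analytic background field for the stratified medium, no target.
e_background_1d = rng.normal(size=n) + 1j * rng.normal(size=n)

# Two numerical solutions with identical meshes and delta sources.
# Both carry the same source/near-field numerical error...
e_fem_no_target = e_background_1d + 0.01 * rng.normal(size=n)
e_fem_with_target = e_fem_no_target + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# ...which cancels in the difference, leaving only the target's scattered field.
e_scattered = e_fem_with_target - e_fem_no_target

# Total field = analytic background + scattered field.
e_total = e_background_1d + e_scattered
```

The point of the subtraction is that the shared numerical error of the two FEM runs (source singularity, boundary reflections) cancels, so the total field inherits the accuracy of the analytic background.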
The use of controlled-source electromagnetics (CSEM) in the marine environment has grown rapidly in the past few years from a simple anomaly fluid-hunting technique used in geologically simple environments to a modeling- and inversion-based technique applied in structurally and lithologically complex environments. The tool set most commonly available to interpreters includes one-, two-, and three-dimensional forward and inverse modeling codes. All previous examples, reported in the literature, of inversion codes applied to marine CSEM data have been cell-based regularized techniques designed to produce the smoothest possible isotropic conductivity model (in two or three dimensions) that fits the observed data. We report on the development of a new technique, anisotropic sharp-boundary inversion, in which the model is parameterized by two-dimensional interfaces. In this approach, anisotropic conductivity can have sharp contrasts across interfaces. Regularization is applied to the smoothness of the interfaces and to the lateral variations of conductivity between interfaces. We demonstrate a workflow that progresses from forward modeling through fast depth migration to smooth cell-based inversion, concluding with sharp-boundary inversion for the final interpreted conductivity image.
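A regularized objective of the kind implied above, with a data misfit plus smoothness penalties on interface depth and on lateral conductivity variation, can be sketched as follows. The function name, weights, and discrete roughness measures are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

# Toy sketch of a sharp-boundary inversion objective: data misfit plus
# first-difference roughness penalties on (a) interface depths and
# (b) lateral conductivity variation between interfaces.
# alpha and beta are assumed regularization weights.

def objective(d_obs, d_pred, z_iface, sigma_lateral, alpha=1.0, beta=1.0):
    misfit = np.sum((d_obs - d_pred) ** 2)
    iface_rough = np.sum(np.diff(z_iface) ** 2)        # interface smoothness
    sigma_rough = np.sum(np.diff(sigma_lateral) ** 2)  # lateral conductivity smoothness
    return misfit + alpha * iface_rough + beta * sigma_rough

# A smooth interface is penalized less than a rough one with the same misfit.
d = np.zeros(4)
smooth = objective(d, d, z_iface=np.array([1000.0, 1005.0, 1010.0]),
                   sigma_lateral=np.array([1.0, 1.0, 1.0]))
rough = objective(d, d, z_iface=np.array([1000.0, 1100.0, 1000.0]),
                  sigma_lateral=np.array([1.0, 1.0, 1.0]))
assert smooth < rough
```

The design choice worth noting is that the penalty acts on the geometry of the interfaces and the lateral conductivity trends, not on the vertical conductivity contrast across an interface, which is left free to be sharp.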
A new method is presented for seismic deghosting of towed streamer data acquired in rough seas. The deghosting scheme combines pressure recordings along one or several cables with an estimate of the vertical pressure gradient (or the vertical component of the particle velocity). The estimation of the vertical pressure gradient requires continuous elevation measurements of the wave height directly above the receivers. The vertical pressure gradient estimate is obtained by spatially weighting the pressure field. Each spatial weight generally is the product of two weight functions. The first is a function of partial derivatives acting solely along the horizontal Cartesian coordinates. It can be implemented by finite-difference or Fourier derivative operations. The second is a function of the vertical Cartesian coordinate and accounts for the varying sea state. This weight can be changed from one receiver to the next, making the deghosting a local process. Integrated with the measured pressure field, the estimate of the vertical pressure gradient also enables other seismic processing opportunities beyond deghosting.
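As a simplified illustration of why combining pressure with its vertical gradient removes the ghost, consider vertical incidence beneath a flat sea surface, a special case of the rough-sea scheme above. The velocity, depth, and frequency values are assumptions for the sketch.

```python
import numpy as np

# Illustrative special case: vertical incidence below a flat free surface
# (reflection coefficient -1). The recorded pressure has ghost notches, but
# a weighted sum of pressure and vertical pressure gradient recovers the
# up-going wave. The rough-sea scheme above generalizes this with spatially
# varying, receiver-by-receiver weights.

c = 1500.0                        # water velocity, m/s (assumed)
h = 10.0                          # receiver depth below surface, m (assumed)
f = np.linspace(5.0, 200.0, 400)  # frequencies, Hz (assumed band)
k = 2.0 * np.pi * f / c           # vertical wavenumber at normal incidence

# Unit up-going wave plus its surface ghost, recorded at depth h.
p = np.exp(1j * k * h) - np.exp(-1j * k * h)            # = 2i sin(kh): notched
dp_dz = 1j * k * (np.exp(1j * k * h) + np.exp(-1j * k * h))

# Deghosting: half the sum of pressure and the scaled vertical gradient.
up = 0.5 * (p + dp_dz / (1j * k))                       # = exp(ikh): flat spectrum

assert np.min(np.abs(p)) < 0.1       # ghost notch present in pressure alone
assert np.allclose(np.abs(up), 1.0)  # deghosted amplitude spectrum is flat
```

The pressure spectrum alone vanishes at the ghost notches (here near 75 Hz and 150 Hz), where the gradient term is at its maximum; their combination therefore has no notches.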
Seismic data volumes, which require huge transmission capacities and massive storage media, continue to increase rapidly due to acquisition of 3D and 4D multiple streamer surveys, multicomponent data sets, reprocessing of prestack seismic data, calculation of post-stack seismic data attributes, etc. We consider lossy compression as an important tool for efficient handling of large seismic data sets. We present a 2D lossy seismic data compression algorithm, based on sub-band coding, and we focus on adaptation and optimization of the method for common-offset gathers. The sub-band coding algorithm consists of five stages: first, a preprocessing phase using an automatic gain control to decrease the non-stationary behaviour of seismic data; second, a decorrelation stage using a uniform analysis filter bank to concentrate the energy of seismic data into a minimum number of sub-bands; third, an iterative classification algorithm, based on an estimation of variances of blocks of sub-band samples, to classify the sub-band samples into a fixed number of classes with approximately the same statistics; fourth, a quantization step using a uniform scalar quantizer, which gives an approximation of the sub-band samples to allow for high compression ratios; and fifth, an entropy coding stage using a fixed number of arithmetic encoders matched to the corresponding statistics of the classified and quantized sub-band samples to achieve compression. Decompression basically performs the opposite operations in reverse order. We compare the proposed algorithm with three other seismic data compression algorithms. The high performance of our optimized sub-band coding method is supported by objective and subjective results.
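The fourth stage, uniform scalar quantization, can be sketched in a few lines together with the matching dequantizer used on decompression. The step size and sample values here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of uniform scalar quantization of sub-band samples.
# Quantization maps samples to small integer indices (cheap to entropy-code);
# dequantization reconstructs approximate sample values on decompression.

def quantize(samples, step):
    """Map samples to integer indices (mid-tread uniform quantizer)."""
    return np.round(samples / step).astype(int)

def dequantize(indices, step):
    """Reconstruct approximate sample values from indices."""
    return indices * step

step = 0.25                                        # assumed step size
subband = np.array([0.93, -0.41, 0.07, 1.30, -0.88])
idx = quantize(subband, step)                      # small integers
rec = dequantize(idx, step)                        # lossy reconstruction

# Quantization error is bounded by half the step size.
assert np.max(np.abs(subband - rec)) <= step / 2
```

The step size controls the rate-distortion trade-off: a larger step yields fewer distinct indices (higher compression) at the cost of larger reconstruction error, which is why the preceding classification stage matters, as it lets each class be quantized and entropy-coded with statistics matched to its variance.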
The use of discrete-wavelet-based analysis, feature extraction, denoising, and compression methods has led to extremely interesting developments in the field of seismic data processing. However, discrete wavelets belong to a wider class of filter banks. The use of more general filter banks allows the design of filter coefficients matched to the signal's properties. Consequently, general filter banks can improve upon the performance of discrete-wavelet-based seismic data processing techniques. In this paper, we discuss the basics of general filter bank theory and its applications to seismic data compression and denoising. We show that properly designed filter banks are able to outperform discrete wavelets in both instances.
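The simplest member of the filter bank family discussed above is the two-channel Haar bank, sketched below: the analysis stage splits a signal into low- and high-pass sub-bands, and the synthesis stage reconstructs the input exactly. More general designs replace these two-tap filters with longer coefficients optimized to the data's properties.

```python
import numpy as np

# Two-channel Haar filter bank: the simplest perfect-reconstruction case.
# Analysis splits the signal into approximation (low) and detail (high)
# sub-bands at half the sample rate; synthesis inverts the split exactly.

def haar_analysis(x):
    x = np.asarray(x, dtype=float)       # assumes even length
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation sub-band
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail sub-band
    return low, high

def haar_synthesis(low, high):
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2.0)
    x[1::2] = (low - high) / np.sqrt(2.0)
    return x

x = np.array([4.0, 2.0, 5.0, 7.0, 1.0, 3.0])
low, high = haar_analysis(x)
assert np.allclose(haar_synthesis(low, high), x)  # perfect reconstruction
```

For compression or denoising, the sub-bands are quantized or thresholded before synthesis; the perfect-reconstruction property guarantees that any remaining error comes only from that processing, not from the transform itself.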