Abstract: Different seismic data compression algorithms have been developed to make storage more efficient and to reduce both transmission time and cost. In general, these algorithms have three stages: transformation, quantization and coding. The wavelet transform is widely used to compress seismic data because of the ability of wavelets to represent geophysical events in seismic data. We selected the lifting scheme to implement the wavelet transform because it reduces both computational and s…
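The lifting scheme mentioned in the abstract computes a wavelet transform in place through alternating predict and update steps, instead of explicit filter-bank convolutions. Below is a minimal numpy sketch of one level of the CDF 5/3 (LeGall) lifting transform with periodic boundary handling; it illustrates the structure only and is not the paper's implementation, which we assume follows the same predict/update pattern.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the CDF 5/3 wavelet via lifting (illustrative sketch).

    Assumes an even-length 1D signal and periodic boundaries.
    """
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # Predict step: detail = odd sample minus prediction from neighbouring evens.
    d = odd - 0.5 * (even + np.roll(even, -1))
    # Update step: approximation = even sample plus correction from details.
    a = even + 0.25 * (d + np.roll(d, 1))
    return a, d

def lifting_53_inverse(a, d):
    # Replay the lifting steps in reverse order with flipped signs.
    even = a - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

trace = np.sin(np.linspace(0, 20, 64)) + 0.1 * np.random.randn(64)
a, d = lifting_53_forward(trace)
assert np.allclose(lifting_53_inverse(a, d), trace)  # perfect reconstruction
```

Because each lifting step only adds a filtered copy of one channel to the other, inversion is exact and requires no extra buffers, which is what makes the scheme attractive in both computation and memory.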
“…Among many transforms, wavelet based approaches have played a dominant role in performing decorrelation of seismic data [7]- [9]. The popularity of the wavelet based coding scheme could be found in its efficient data representation in the transformed domain which easily allows compressed image manipulation, e.g., by utilizing straightforward quality control scheme or progressive image decompression.…”
Section: A. Related Work on Lossy Seismic Data Compression
M. Radosavljević and D. Vukobratović would like to acknowledge the European Union's Horizon 2020 research and innovation project under Grant Agreement number 856697.

ABSTRACT: Motivated by the superior performance of High Efficiency Video Coding (HEVC), and driven by the rapid growth in data volume produced by seismic surveys, in this work we explore a 32 bits per pixel (b/p) extension of the HEVC codec for compression of seismic data. We propose to reassemble seismic slices in a format that corresponds to a video signal and to benefit from the coding gain achieved by the HEVC inter mode, in addition to the possible advantages of the (still-image) HEVC intra mode. To this end, we modify almost all components of the original HEVC codec to cater for high bit-depth coding of seismic data: the Lagrange multiplier used in optimization of the coding parameters has been adapted to the new data statistics, the core transform and quantization have been reimplemented to handle the increased bit-depth range, and a modified adaptive binary arithmetic coder has been employed for efficient entropy coding. Even though the new codec, after implementation of the proposed modifications, goes beyond standardized HEVC, it still maintains a generic HEVC structure and is developed within the general HEVC framework. Thus, we tailored a specific codec design which, when compared to JPEG-XR and a commercial wavelet-based codec, significantly improves the peak signal-to-noise ratio (PSNR) vs. compression ratio performance for 32 b/p seismic data. Depending on the configuration, the PSNR gain ranges from 3.39 dB up to 9.48 dB. Also, relying on the specific characteristics of seismic data, we propose an optimized encoder that reduces encoding time by 67.17% for the All-I configuration on the trace image dataset, and by 67.39% for All-I, 97.96% for the P2 configuration and 98.64% for the B configuration on the 3D wavefield dataset, with negligible coding performance losses.

INDEX TERMS: High bit-depth seismic data compression, 3D volumetric seismic data, HEVC.
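To make the slice-to-video reassembly concrete, the following numpy sketch treats each inline slice of a 3D seismic volume as one video frame, so that a codec's inter prediction can exploit slice-to-slice correlation, and computes PSNR using the data's own dynamic range as the 32 b/p peak. Both helper names are hypothetical, and the sketch deliberately stops short of the HEVC coding loop itself, which the paper obtains by modifying the codec internals.

```python
import numpy as np

def slices_to_video(volume):
    """Split a hypothetical (inline, x, t) volume into per-slice 'frames'."""
    return [volume[i] for i in range(volume.shape[0])]

def psnr_32bpp(original, decoded):
    """PSNR for high bit-depth data, with the signal's range as peak value."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    peak = float(original.max() - original.min())
    return 10.0 * np.log10(peak ** 2 / mse)

volume = np.random.randn(8, 64, 64).astype(np.float32)  # stand-in volume
frames = slices_to_video(volume)
print(len(frames), frames[0].shape)  # 8 frames of 64x64 samples
```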
“…Indeed, the compressed sensing fields have helped to tackle many difficulties related to seismic data starting from acquisition to full waveform inversion by exploiting the sparse structure of seismic data (Herrmann et al., 2013; Lin & Herrmann, 2013; Mansour et al., 2012). Conventionally, seismic compression algorithms are based on fixed sparse transforms (Averbuch et al., 2001; Duval & Rosten, 2000; Fajardo et al., 2015; Wang et al., 2004; Zheng & Liu, 2012), where the basis functions are analytically predefined and already known by the encoder and decoder, such as discrete cosines, wavelets and others (Elad, 2010; Mallat, 2008). By contrast, other seismic compression algorithms based on learned transforms have recently emerged.…”
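As a point of reference for the fixed-transform approach the excerpt describes, the sketch below compresses a gather by keeping only the largest coefficients of a 2D discrete cosine transform; because the basis is analytically predefined, the encoder and decoder share it at no cost. The function names and the 5% retention fraction are illustrative choices, not taken from the cited works.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(gather, keep=0.05):
    """Keep the largest `keep` fraction of 2D DCT coefficients, zero the rest."""
    coeffs = dctn(gather.astype(float), norm="ortho")
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return coeffs  # in practice only the nonzero entries would be stored

def dct_decompress(coeffs):
    return idctn(coeffs, norm="ortho")

gather = np.random.randn(256, 64)  # stand-in for a shot gather
approx = dct_decompress(dct_compress(gather, keep=0.05))
```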
In the marine seismic industry, the size of the recorded and processed seismic data is continuously increasing and tends to become very large. Hence, applying compression algorithms specifically designed for seismic data at an early stage of the seismic processing sequence helps to save cost on storage and data transfer. Dictionary learning methods have been shown to provide state-of-the-art results for seismic data compression. These methods capture similar events from the seismic data and store them in a dictionary of atoms that can be used to represent the data in a sparse manner. However, as with conventional compression algorithms, these methods still require the data to be decompressed before a processing or imaging step is carried out. Parabolic dictionary learning is a dictionary learning method where the learned atoms follow a parabolic travel-time moveout and are characterized by kinematic parameters such as the slope and the curvature. In this paper, we present a novel method where such kinematic parameters are used to perform the dual-sensor (or two-component) wavefield separation processing step directly in the dictionary-learning compressed domain for 2D seismic data. On a synthetic seismic data set, we demonstrate that our method achieves results similar to an industry-standard FK-based method for wavefield separation, with the advantage of being robust to spatial aliasing without the need for data preconditioning such as interpolation, while reaching a compression ratio of around 13. On a field data set from a marine seismic acquisition, we observe insignificant differences on a 2D stacked seismic section between the two methods, while reaching a compression ratio higher than 15 with our method. Such a method could allow full-bandwidth data transfer from vessels to onshore processing centres, where the compressed data could be used to reconstruct not only the recorded data sets but also the up- and down-going parts of the wavefield.
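A parabolic atom can be parametrized directly by its kinematic parameters, the slope p and curvature q of the travel-time moveout t(x) = t0 + p*x + q*x^2. The numpy sketch below builds such an atom by placing a wavelet along that trajectory; it reflects our reading of the method, and the paper's exact atom construction may differ.

```python
import numpy as np

def ricker(n=21, f=0.1):
    """Simple Ricker wavelet, used here only to give the atom a waveform."""
    t = np.arange(n) - n // 2
    return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

def parabolic_atom(nt, nx, dt, dx, t0, slope, curvature, wavelet):
    """Local atom whose event follows t(x) = t0 + slope*x + curvature*x**2."""
    atom = np.zeros((nt, nx))
    x = (np.arange(nx) - nx // 2) * dx            # offsets centred in the window
    t_event = t0 + slope * x + curvature * x ** 2
    half = wavelet.size // 2
    for ix, t in enumerate(t_event):
        it = int(round(t / dt))                   # event's time sample at this trace
        lo, hi = it - half, it - half + wavelet.size
        wlo = max(0, -lo)                         # clip wavelet at window edges
        whi = wavelet.size - max(0, hi - nt)
        lo, hi = max(0, lo), min(nt, hi)
        if lo < hi:
            atom[lo:hi, ix] = wavelet[wlo:whi]
    return atom / np.linalg.norm(atom)

atom = parabolic_atom(nt=64, nx=16, dt=0.004, dx=12.5,
                      t0=0.1, slope=2e-4, curvature=1e-6, wavelet=ricker())
```

Because each atom carries its slope explicitly, a separation criterion can in principle be evaluated on the sparse coefficients themselves, which is what allows processing in the compressed domain.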
For a marine seismic survey, the recorded and processed data size can reach several terabytes. Storing seismic data sets is costly, and transferring them between storage devices can be challenging. Dictionary learning has been shown to provide representations with a high level of sparsity: the shape of each redundant event is stored once, and every occurrence of that event is represented by a single sparse coefficient. We therefore develop an efficient dictionary-learning-based compression workflow specifically designed for seismic data. This compression method differs from conventional compression methods in three respects: 1) the transform domain is not predefined but data-driven; 2) the redundancy in seismic data is fully exploited by learning small dictionaries from local windows of the seismic shot gathers; 3) two modes are proposed, depending on the geophysical application. On a test seismic data set, we demonstrate the superior performance of the proposed workflow in terms of compression ratio over a wide range of signal-to-residual ratios, compared to standard seismic data compression methods such as the zfp software or algorithms from the Seismic Unix package. Using a more realistic data set from a marine seismic acquisition, we evaluate the capability of the proposed workflow to preserve the seismic signal for different applications. For applications such as near-real-time transmission and long-term data storage, we observe insignificant signal leakage on a 2D line stack when the dictionary learning method reaches a compression ratio of 24.85. For other applications, such as visual QC of shot gathers, our method preserves the visual aspect of the data even at a compression ratio of 95.
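The three-step workflow (learn a small dictionary from local windows, sparse-code the patches, store the dictionary plus the few nonzero coefficients) can be sketched with scikit-learn's dictionary learning tools. The patch size, dictionary size and sparsity level below are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

gather = np.random.randn(512, 128)                 # stand-in for a shot gather

# 1) Learn a small dictionary from local windows of the gather.
patches = extract_patches_2d(gather, (16, 16), max_patches=2000)
X = patches.reshape(len(patches), -1)
dico = MiniBatchDictionaryLearning(n_components=64,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4)

# 2) Sparse-code: at most 4 nonzero coefficients per 256-sample patch.
codes = dico.fit(X).transform(X)
print("nonzero fraction:", np.count_nonzero(codes) / codes.size)

# 3) Decompression: patches are rebuilt from the sparse codes and dictionary.
recon_patches = codes @ dico.components_
```

The compressed payload is the dictionary plus the sparse codes, which is why the achievable compression ratio grows with the redundancy of the events the dictionary captures.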