2020
DOI: 10.5194/gmd-2020-325
Preprint

Lossy Checkpoint Compression in Full Waveform Inversion

Abstract: This paper proposes a new method that combines checkpointing methods with error-controlled lossy compression for large-scale high-performance Full-Waveform Inversion (FWI), an inverse problem commonly used in geophysical exploration. This combination can significantly reduce data movement, allowing a reduction in run time as well as peak memory. In the Exascale computing era, frequent data transfer (e.g., memory bandwidth, PCIe bandwidth for GPUs, or network) is the performance bottleneck rather …

Cited by 9 publications (12 citation statements)
References 28 publications

Citation statements:
“…While lossy compression means that some of the recovered data are not entirely identical to what was previously compressed, Kukreja et al. (2020) reported compression ratios from 10 to a thousand in an FWI implementation, while showing no perceivable impact on the overall solution. Notably, the maximum error introduced by the method, and the associated compression ratio, is up to the user to control.…”
Section: High-Performance Computing Techniques
confidence: 98%
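The user-controlled maximum error mentioned in this statement is the defining property of error-bounded lossy compressors. As a minimal illustration of the principle only (uniform quantization; not the actual compressor evaluated in the paper), an absolute error bound can be enforced like this:

```python
import numpy as np

def compress(field, abs_error):
    """Quantize to integer multiples of 2*abs_error (round to nearest).
    Rounding error is at most half a step, so the reconstruction
    satisfies |decompressed - original| <= abs_error."""
    step = 2.0 * abs_error
    q = np.round(field / step).astype(np.int32)
    return q, step

def decompress(q, step):
    return q.astype(np.float64) * step

rng = np.random.default_rng(0)
wavefield = rng.standard_normal((64, 64))      # stand-in for a forward wavefield snapshot
q, step = compress(wavefield, abs_error=1e-3)
recovered = decompress(q, step)
```

A real compressor would follow the quantization with an entropy-coding stage to achieve the large compression ratios quoted above; the sketch only demonstrates how a user-specified error bound is guaranteed.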
“…From Equation 4, we can easily estimate the memory footprint of our method compared to conventional FWI. For completeness, we also consider other mainstream low-memory methods: optimal checkpointing (Griewank and Walther, 2000; Symes, 2007; Kukreja et al., 2020), boundary methods (McMechan, 1983; Mittet, 1994; Raknes and Weibull, 2016), and DFT methods (Nihei and Li, 2007; Sirgue et al., 2010; Witte et al., 2019). This memory overview generalizes easily to other wave equations and imaging conditions, as our method generalizes to any time-domain adjoint-state method.…”
Section: Memory Estimates
confidence: 99%
“…To tackle this memory requirement and to open the way to the use of low-memory accelerators such as GPUs, different methods have been proposed that balance memory usage against computational overhead to reduce the memory footprint. One of the earliest methods in this area is optimal checkpointing (Griewank and Walther, 2000; Symes, 2007), which has recently been extended to include wavefield compression (Kukreja et al., 2020). Given the available memory, optimal checkpointing reduces storage needs at the expense of having to recompute the forward wavefield.…”
Section: Introduction
confidence: 99%
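The storage-versus-recomputation trade-off described in this statement can be sketched with a simplified, evenly spaced checkpointing scheme. The optimal (Revolve-style) schedule cited above is more elaborate; the function and step counts below are purely illustrative:

```python
def forward_step(u):
    # stand-in for one explicit time step of the forward wave solver
    return 0.5 * u + 1.0

def adjoint_with_checkpoints(u0, nt, n_checkpoints):
    """Store the forward state only every `stride` steps; during the
    backward sweep, recompute the state at each time t from the
    nearest stored checkpoint instead of keeping all nt+1 states."""
    stride = max(1, nt // n_checkpoints)
    checkpoints = {0: u0}
    u = u0
    for t in range(1, nt + 1):              # forward pass
        u = forward_step(u)
        if t % stride == 0:
            checkpoints[t] = u
    n_recomputed = 0
    for t in range(nt, -1, -1):             # backward sweep needs u at each t
        base = (t // stride) * stride
        u = checkpoints[base]
        for _ in range(t - base):           # recompute from nearest checkpoint
            u = forward_step(u)
        n_recomputed += t - base
        # ...u is now the forward state at time t, ready for the adjoint step
    return n_recomputed, len(checkpoints)

n_recomputed, n_stored = adjoint_with_checkpoints(0.0, nt=100, n_checkpoints=10)
# 11 stored states instead of 101, at the cost of 450 extra forward steps
```

Lossy compression of the checkpoints, as in Kukreja et al. (2020), attacks the other side of the same trade-off: it shrinks each stored state so that more checkpoints fit in the same memory budget, reducing the recomputation count.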
“…This method was initially introduced to tackle the memory limitations of CPUs and has been used successfully in 3D seismic applications. To further limit the computational overhead, Kukreja et al. [6] recently supplemented this approach by adding on-the-fly compression and decompression of the forward wavefields. In situations where the wave physics is reversible, researchers [7][8][9] have shown that forward wavefields can also be recomputed from boundary values.…”
Section: Introduction
confidence: 99%
“…Following ideas from randomized linear algebra to estimate the trace of a matrix, Louboutin and Herrmann [19] proposed an approximation of the adjoint-state method that leads to major memory improvements, is relatively easy to implement, and is supported by theory [20,21] guaranteeing convergence, including bounds on the accuracy. However, unlike other approximate methods, such as on-the-fly Fourier-based [22] or lossy compression-based algorithms [6], the artifacts introduced by the proposed randomized trace estimation are incoherent and appear as Gaussian-like noise, which can be handled easily by sparsity-promoting imaging [22]. While the initial results of randomized trace estimation on a simple 2D synthetic were encouraging [19], we submit the proposed approximation to additional scrutiny by considering complex imaging examples that involve salt (SEAM model [23]) and anisotropy (BP TTI model [24]).…”
Section: Introduction
confidence: 99%
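The randomized trace estimation cited in this statement builds on Hutchinson-type estimators, which replace an exact trace with an average over random probe vectors. A minimal numpy sketch with Rademacher probes (illustrative only; not the authors' implementation):

```python
import numpy as np

def hutchinson_trace(A, n_probes, rng):
    """Estimate tr(A) as the sample mean of z^T A z over random
    Rademacher probes z (entries +/-1), since E[z^T A z] = tr(A)."""
    n = A.shape[0]
    est = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z @ (A @ z)
    return est / n_probes

rng = np.random.default_rng(42)
G = rng.standard_normal((50, 50))
A = G @ G.T                       # symmetric PSD test matrix
approx = hutchinson_trace(A, n_probes=200, rng=rng)
exact = np.trace(A)
```

In the adjoint-state setting, each probe lets the gradient be accumulated without storing full forward wavefield history, which is the source of the memory savings; the estimator's variance is what produces the incoherent, Gaussian-like artifacts described above.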