First International Meeting for Applied Geoscience & Energy Expanded Abstracts 2021
DOI: 10.1190/segam2021-3581836.1
Learning by example: Fast reliability-aware seismic imaging with normalizing flows

Cited by 14 publications (10 citation statements)
References 24 publications
“…Typically these samples are obtained by applying a series of learned nonlinear functions to random realizations from a canonical distribution. Early work on variational inference [26,30,31,44,[113][114][115][116][117][118] shows encouraging results, which opens enticing new perspectives on uncertainty quantification in the field of wave-equation based inversion.…”
Section: Discussion (mentioning)
confidence: 99%
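The sampling procedure described in the excerpt above — pushing realizations from a canonical distribution through a series of learned nonlinear, invertible maps — can be sketched with a minimal toy normalizing flow. The coupling layer and the conditioner functions below are fixed stand-ins chosen for illustration, not the trained networks from any of the cited works.

```python
import numpy as np

# Toy stand-ins for learned conditioner networks; real ones are trained,
# these are fixed nonlinearities chosen for illustration only.
shift = lambda h: np.tanh(h)
log_scale = lambda h: 0.5 * np.tanh(h)

def affine_coupling(x):
    """One invertible coupling layer: transform the second half of x
    conditioned on the (untouched) first half."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    y2 = x2 * np.exp(log_scale(x1)) + shift(x1)
    return np.concatenate([x1, y2], axis=-1)

rng = np.random.default_rng(0)

# Realizations from the canonical (standard normal) distribution.
z = rng.standard_normal((1000, 4))

# Apply a short series of nonlinear invertible maps, reversing the
# coordinate order in between so every coordinate gets transformed.
x = z
for _ in range(3):
    x = affine_coupling(x)
    x = x[..., ::-1]

samples = x  # draws from the pushed-forward (learned) distribution
print(samples.shape)  # (1000, 4)
```

Because each coupling layer leaves half the coordinates untouched and scales the rest by a strictly positive factor, the whole composition stays invertible, which is what makes exact density evaluation possible in a normalizing flow.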
“…While effective in controlled settings, handcrafted priors might introduce unwanted bias to the solution. Recent deep-learning based approaches [20][21][22][23][24][25][26][27][28][29][30][31], on the other hand, learn a prior distribution from available data. While certainly providing a better description of the available prior information when compared to generic handcrafted priors, they may affect the results more seriously when out-of-distribution data is considered, e.g., when the training data is not fully representative of a given scenario.…”
Section: Introduction (mentioning)
confidence: 99%
“…This amounts to feeding the latent code associated with observed data, i.e., T w1 (y), and Gaussian samples z ∼ p z (z) into the inverse network, T −1 w2 . These samples may be used for Bayesian inference if we have an ideal training dataset [9,[15][16][17]. However, such an assumption is rarely correct in geophysical applications due to Earth's strong heterogeneity [18][19][20], which highlights the importance of devising formulations that are robust to changes in data distribution during inference.…”
Section: Amortized Variational Inference (mentioning)
confidence: 99%
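The inference step quoted above — feeding the latent code of the observed data, T_w1(y), together with Gaussian samples z ~ p_z(z) into the inverse network T_w2^{-1} — can be sketched as follows. Both networks here are toy elementwise stand-ins (the conditioning form z + 0.5·zy is an assumption for illustration), not the trained architectures from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)

def T_w1(y):
    # Toy stand-in for the trained data-side network: maps observed
    # data y to its latent code (elementwise, for illustration).
    return np.tanh(y)

def T_w2_inv(z, zy):
    # Toy stand-in for the inverse of the conditional model-side
    # network: maps a Gaussian latent z, conditioned on the data
    # latent zy, to a posterior sample (elementwise invertible in z).
    return np.sinh(z + 0.5 * zy)

y_obs = rng.standard_normal(4)   # a single observed data vector
zy = T_w1(y_obs)                 # latent code of the observed data

# Posterior sampling: hold the data latent fixed, vary the Gaussian code.
z = rng.standard_normal((500, 4))
posterior_samples = T_w2_inv(z, zy)
print(posterior_samples.shape)  # (500, 4)
```

The key point of the amortized setup is visible even in this sketch: once the networks are trained, drawing posterior samples for a new observation costs only cheap forward/inverse passes, with no per-observation optimization.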
“…To achieve this, following Orozco et al [5], we train a conditional normalizing flow [NF,6] to capture the conditional distribution of the unknown, given data, i.e., the posterior distribution. The training involves minimizing an amortized variational inference objective [6][7][8][9][10] using existing training pairs in the form of low-fidelity data and model pairs. After training, we are able to capture the low-fidelity posterior distribution for previously unseen seismic data.…”
Section: Introduction (mentioning)
confidence: 99%
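The amortized variational inference objective mentioned above can be illustrated with a minimal numpy sketch. The conditional affine flow and its conditioner functions s and b below are hypothetical stand-ins chosen so the Jacobian log-determinant is available in closed form; they are not the conditional-NF architecture used in the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy conditional affine flow z = s(y) * x + b(y); with positive scales
# the log-determinant of the Jacobian w.r.t. x is simply sum(log s(y)).
def s(y):
    return 1.0 + 0.5 * np.tanh(y)   # elementwise scales in (0.5, 1.5)

def b(y):
    return 0.1 * y

def amortized_vi_loss(x, y):
    """Amortized variational objective averaged over training pairs:
    0.5 * ||T(x; y)||^2 - log|det dT/dx| (standard-normal latent)."""
    z = s(y) * x + b(y)
    logdet = np.sum(np.log(s(y)), axis=-1)
    return np.mean(0.5 * np.sum(z**2, axis=-1) - logdet)

# Stand-ins for low-fidelity (model, data) training pairs.
x_train = rng.standard_normal((256, 8))
y_train = rng.standard_normal((256, 8))

loss = amortized_vi_loss(x_train, y_train)
print(np.isfinite(loss))  # True
```

Minimizing this average over the training pairs drives the conditional map to transport each model, given its paired data, toward a standard Gaussian, which is exactly what lets the trained inverse map produce posterior samples for unseen data.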
“…This linearized imaging problem is challenged by the computationally expensive forward operator as well as the presence of measurement noise, linearization errors, modeling errors, and the nontrivial nullspace of the linearized forward Born modeling operator [1][2][3]. These challenges highlight the importance of uncertainty quantification (UQ) in seismic imaging, where instead of finding one seismic image estimate, a distribution of seismic images is obtained that explains the observed data [4], consequently reducing the risk of data overfit and enabling UQ [5][6][7][8][9][10][11][12][13][14][15].…”
Section: Introduction (mentioning)
confidence: 99%
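Once a distribution of seismic images is available as samples, the UQ described above reduces to simple sample statistics: a point estimate from the sample mean and an uncertainty map from the pointwise spread. The samples below are random stand-ins with illustrative shapes, not output of any actual imaging method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior samples of a seismic image (n_samples, nz, nx);
# random stand-ins here for samples a trained method would produce.
samples = rng.standard_normal((200, 16, 16)) + 1.0

# Point estimate: the sample (conditional) mean image.
image_mean = samples.mean(axis=0)

# Uncertainty map: pointwise standard deviation across the samples.
image_std = samples.std(axis=0)

print(image_mean.shape, image_std.shape)  # (16, 16) (16, 16)
```

Regions where the standard-deviation map is large are where the data, under the noise and nullspace limitations listed above, constrain the image poorly — the information a single image estimate cannot convey.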