2023
DOI: 10.1007/s40808-023-01712-7

Stability criteria for Bayesian calibration of reservoir sedimentation models

Abstract: Modeling reservoir sedimentation is particularly challenging due to the simultaneous simulation of shallow shores, tributary deltas, and deep waters. The shallow upstream parts of reservoirs, where deltaic avulsion and erosion processes occur, compete with the validity of modeling assumptions used to simulate the deposition of fine sediments in deep waters. We investigate how complex numerical models can be calibrated to accurately predict reservoir sedimentation in the presence of competing model simplificati…

Cited by 5 publications (4 citation statements)
References 55 publications
“…The problem of too small (too close to zero) probabilities in the framework of active learning is also referred to as the curse of dimensionality (Bellman, 1957), which is discussed in detail by Mouris et al. (2023).…”
Section: Methods
confidence: 99%
“…Because the total of the posterior probabilities still needed to sum to 1, the additional dimension would decrease the prior output space to infinitesimally small numbers, which would all be rejected in the rejection sampling step (Oladyshkin et al., 2020; Smith & Gelfand, 1992). The problem of too small (too close to zero) probabilities in the framework of active learning is also referred to as the curse of dimensionality (Bellman, 1957), which is discussed in detail by Mouris et al. (2023).…”
Section: Test Procedures
confidence: 99%
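The collapse these two statements describe can be reproduced with a small numerical experiment. The sketch below is not taken from the cited papers; it is a minimal, self-contained illustration assuming a Gaussian likelihood and the Smith and Gelfand (1992) acceptance rule, showing how the rejection-sampling acceptance rate shrinks toward zero as the number of error dimensions grows.

```python
import numpy as np

rng = np.random.default_rng(42)

def acceptance_rate(n_dims, n_samples=100_000, sigma=1.0):
    """Estimate the rejection-sampling acceptance rate when the
    likelihood is a product of n_dims independent Gaussian error terms.

    Each prior sample is accepted with probability L(sample) / L_max
    (Smith & Gelfand, 1992). As n_dims grows, the joint likelihood
    concentrates near zero and almost every sample is rejected.
    """
    # Hypothetical setup: model-vs-observation errors drawn from the prior
    errors = rng.normal(0.0, 2.0, size=(n_samples, n_dims))
    # Unnormalised joint Gaussian log-likelihood of each prior sample
    log_like = -0.5 * np.sum((errors / sigma) ** 2, axis=1)
    # Normalise by the best sample so acceptance probabilities lie in (0, 1]
    accept_prob = np.exp(log_like - log_like.max())
    u = rng.uniform(size=n_samples)
    return np.mean(u < accept_prob)

for d in (1, 5, 20, 50):
    print(f"{d:3d} dimensions -> acceptance rate ~ {acceptance_rate(d):.2e}")
```

In one dimension a sizeable fraction of prior samples survives; by a few dozen dimensions essentially none do, which is the behaviour the quoted passages attribute to the curse of dimensionality.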
“…For brevity, we will call this multi-fidelity model comparisons. In said cases, simplifications could require averaging available observations or ignoring subsets and/or types of data (e.g., see Mouris et al., 2023). Multi-fidelity comparisons can be useful, not only to select the best model, but also to quantify the change in model performance under different model configurations.…”
Section: Introduction
confidence: 99%
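As a hedged illustration of such a comparison, the sketch below scores two hypothetical model configurations against the same observations using a Monte-Carlo estimate of Bayesian model evidence (the prior-averaged likelihood). The models, prior, and data are invented for the example; the low-fidelity variant "averages away" the signal, mimicking the kind of data simplification mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_bme(model, observations, prior_samples, sigma=0.5):
    """Monte-Carlo estimate of log Bayesian model evidence:
    the log of the prior-averaged Gaussian likelihood of the observations.
    """
    sims = np.array([model(theta) for theta in prior_samples])
    log_like = -0.5 * np.sum(((sims - observations) / sigma) ** 2, axis=1)
    # Log-mean-exp for numerical stability
    m = log_like.max()
    return m + np.log(np.mean(np.exp(log_like - m)))

# Hypothetical "truth" and two model fidelities (all names are illustrative)
x = np.linspace(0, 1, 8)
observations = np.sin(2 * np.pi * x)

def high_fidelity(theta):
    return theta * np.sin(2 * np.pi * x)          # resolves the spatial pattern

def low_fidelity(theta):
    return theta * np.full_like(x, observations.mean())  # averages it away

prior_samples = rng.uniform(0.5, 1.5, size=200)

for name, model in [("high fidelity", high_fidelity), ("low fidelity", low_fidelity)]:
    print(f"{name}: log BME ~ {log_bme(model, observations, prior_samples):.2f}")
```

The evidence gap between the two configurations quantifies how much performance is lost to the simplification, which is the kind of multi-fidelity comparison the quoted passage has in mind.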
“…The paper, however, does not present a specific application. The approach proposed in Oladyshkin and Nowak (2019) has been applied in active learning techniques for surrogate model generation, which closely resembles optimal experimental design setups (Mouris et al., 2023; Oladyshkin et al., 2020), but not, to the authors' knowledge, for model selection or similarity analysis.…”
Section: Introduction
confidence: 99%
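The active-learning pattern referenced here, in which the next training point is chosen where the surrogate is most uncertain (much like optimal experimental design), can be sketched as follows. This is a generic illustration rather than the cited authors' implementation: the expensive_model stand-in and all parameters are hypothetical, and scikit-learn's GaussianProcessRegressor merely serves as an example surrogate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def expensive_model(x):
    # Hypothetical stand-in for a costly simulator run
    return np.sin(6 * x) + 0.1 * x

X_pool = np.linspace(0, 1, 200).reshape(-1, 1)   # candidate inputs
X = rng.uniform(0, 1, size=(3, 1))               # small initial design
y = expensive_model(X).ravel()

gp = GaussianProcessRegressor()
for _ in range(10):
    gp.fit(X, y)
    _, std = gp.predict(X_pool, return_std=True)
    x_next = X_pool[np.argmax(std)]              # query where the surrogate is least certain
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_model(x_next))

print(f"Surrogate trained on {len(X)} runs of the expensive model")
```

Each iteration spends one expensive model run where it most reduces surrogate uncertainty, which is why the quoted passage likens this loop to optimal experimental design.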