2021
DOI: 10.1016/j.nucengdes.2021.111230

A Bayesian framework of inverse uncertainty quantification with principal component analysis and Kriging for the reliability analysis of passive safety systems

Cited by 20 publications (13 citation statements)
References 38 publications
“…However, linear dimensionality reduction techniques like PCA cannot deal with complex (real-world) non-linear data [29]. Furthermore, for the specific case of interest here, BE code results may be affected by higher noise (maybe due to numerics or correlation errors [35]) in comparison with the experimental data in relation to the power exchanged by the HX [47] and typically require pre-processing by the analyst (e.g., filtering of the available raw data) [35], which may add a bias. To overcome the limitations of PCA when dealing with time series with small signal-to-noise ratio values, in this work, we explore the use of Autoencoders (AEs) for dimensionality reduction.…”
Section: List Of Symbols
confidence: 99%
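The excerpt above argues that linear PCA struggles with nonlinear time-series data. A minimal NumPy sketch (synthetic toy data, not from the cited works) illustrates the point: signals generated from a single nonlinear parameter still require more than one linear principal component to reconstruct.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 noisy time series lying on a 1-D nonlinear
# (quadratic) manifold embedded in 50 time points -- a stand-in for the
# noisy BE-code transients discussed in the excerpt.
t = np.linspace(0.0, 1.0, 50)
a = rng.uniform(-1.0, 1.0, size=200)
X = a[:, None] * t[None, :] + (a[:, None] ** 2) * np.sin(2 * np.pi * t)[None, :]
X += 0.05 * rng.standard_normal(X.shape)  # additive measurement-like noise

# Linear PCA via SVD on the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

def pca_recon_error(k):
    """Relative reconstruction error keeping the first k principal components."""
    Xk = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
    return np.linalg.norm(Xc - Xk) / np.linalg.norm(Xc)

# Although the data depend on a single parameter a, the quadratic term
# forces linear PCA to spend a second component on it.
err1, err2 = pca_recon_error(1), pca_recon_error(2)
print(f"k=1 error: {err1:.3f}, k=2 error: {err2:.3f}")
```

An autoencoder with a one-neuron bottleneck could in principle capture this manifold with a single latent variable, which is the motivation for the AE-based approach the excerpt describes.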
“…It is worth mentioning that the values reported in Table 1 are rescaled factors (i.e., all the parameters have been normalized with respect to their prior nominal values). For more details about the description of the parameters and the selection of the prior ranges, please refer to [35].…”
Section: Case Study
confidence: 99%
“…Instead, feature extraction aims at identifying a set of "new" features (i.e., new input parameters or variables), generated by transformations of the initial ones (in other words, generating a new, lower-dimensional input subspace as a linear or nonlinear function of the original one) [33]. Some of the most effective and widely used feature extraction techniques are Principal Component Analysis (PCA) ( [35,43]; [61,91]), the Active Subspace Method (ASM) [18,26] and AutoEncoders (AEs) [37,58,70,90]. Finally, Sensitivity Analysis (SA) methods have the same objective as feature selection, but they achieve it by ranking the input parameters and variables according to their influence on the outputs of the model [12,76,81].…”
Section: Dimensionality Reduction
confidence: 99%
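Among the feature extraction techniques listed in the excerpt, the Active Subspace Method identifies the input directions along which the model output varies most, by eigendecomposing the expected outer product of the output gradient. A minimal sketch on a hypothetical test function (all names and the function itself are illustrative, not from the cited references):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model f(x) = sin(w . x): its output varies only along the
# single "active" direction w, although x lives in 10 dimensions.
d = 10
w = rng.standard_normal(d)
w /= np.linalg.norm(w)

def grad_f(x):
    return np.cos(w @ x) * w  # analytic gradient of sin(w . x)

# Active Subspace Method: estimate C = E[grad f grad f^T] by Monte Carlo
# over the input distribution, then eigendecompose it.
xs = rng.standard_normal((500, d))
C = sum(np.outer(grad_f(x), grad_f(x)) for x in xs) / len(xs)
eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order

# The dominant eigenvector recovers the active direction (up to sign),
# and the top eigenvalue carries essentially all of the variation.
active_dir = eigvecs[:, -1]
alignment = abs(active_dir @ w)
share = eigvals[-1] / eigvals.sum()
print(f"top eigenvalue share: {share:.3f}, alignment with w: {alignment:.3f}")
```

In practice the gradients are rarely analytic and must be approximated (e.g., by finite differences or surrogate models), but the eigendecomposition step is the same.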
“…In a series of papers [87] [88] [89], Liu and his colleagues applied the MBA method to calibrate a two-fluid model-based multiphase computational fluid dynamics with high-resolution experimental data. Roma et al [90] performed IUQ of RELAP5-3D model for the reliability analysis of passive safety systems, using a simplified version of the MBA method by ignoring the model bias term.…”
Section: Marginalization Is Not Needed
confidence: 99%
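The simplified formulation mentioned in the excerpt, in which the model bias (discrepancy) term is dropped, reduces inverse uncertainty quantification to Bayesian calibration of the input parameters against the data alone. A minimal one-parameter sketch (the surrogate model, prior range, and noise level are all illustrative assumptions, not the RELAP5-3D setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-parameter code surrogate: theta is a rescaled factor,
# as in the case study, with a uniform prior around its nominal value 1.
def model(theta, t):
    return theta * np.exp(-t)

# Synthetic "experimental" data generated at a known true theta.
t_obs = np.linspace(0.0, 2.0, 20)
theta_true, sigma = 1.3, 0.05
y_obs = model(theta_true, t_obs) + sigma * rng.standard_normal(t_obs.size)

# Posterior on a grid: uniform prior times Gaussian likelihood, with no
# model bias term (the simplification discussed in the excerpt).
grid = np.linspace(0.5, 2.0, 1001)
log_like = np.array(
    [-0.5 * np.sum((y_obs - model(th, t_obs)) ** 2) / sigma**2 for th in grid]
)
post = np.exp(log_like - log_like.max())
post /= post.sum() * (grid[1] - grid[0])  # normalize to a density

theta_map = grid[np.argmax(post)]
print(f"posterior mode: {theta_map:.3f}")
```

When the code is genuinely biased, dropping the discrepancy term can make the posterior overconfident and shifted, which is why the full MBA formulation retains it.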