2017
DOI: 10.1177/1077546317733906

Improved model correlation through optimal parameter ranking using model reduction algorithms: Augmenting engineering judgment

Abstract: As the complexity and scale of dynamic models increase, novel and efficient model correlation methodologies are vital to the development of accurate models. Classically, to correlate a Finite Element Model (FEM) so that it matches a dynamic test, an experienced engineer chooses a small subset of input parameters that are surmised to be crucial, sensitive, and/or possibly erroneous. The operator then uses engineering judgment, or a model updating technique, to update the selected subset of parameters until…
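The workflow sketched in the abstract starts from a sensitivity matrix relating the candidate input parameters to the outputs compared against test data. Below is a minimal, hedged sketch of that step using a forward-difference Jacobian; `modal_outputs` is a hypothetical stand-in for the FEM solve, and the paper's actual model, outputs, and parameters are not reproduced here.

```python
import numpy as np

def modal_outputs(params):
    # Hypothetical surrogate for the FEM solve: returns a vector of outputs
    # (e.g., natural frequencies) for a given parameter vector. Replace with
    # a call into the real analysis code.
    n = params.size
    A = np.outer(np.arange(1, n + 1), params) + np.diag(np.arange(1.0, n + 1))
    return np.sort(np.linalg.eigvalsh(A @ A.T))

def sensitivity_matrix(params, h=1e-6):
    """Forward-difference Jacobian: rows are outputs, columns are parameters."""
    p = np.asarray(params, dtype=float)
    f0 = modal_outputs(p)
    S = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        step = h * max(1.0, abs(p[j]))   # relative step to keep scaling reasonable
        dp[j] += step
        S[:, j] = (modal_outputs(dp) - f0) / step
    return S

S = sensitivity_matrix(np.array([2.0, 0.5, 1.3, 4.0, 0.9]))
print(S.shape)   # (n_outputs, n_parameters)
```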

Cited by 4 publications (3 citation statements) | References 36 publications (40 reference statements)
“…While still in the modeling context, Phoenix et al. (2018) use DEIM and Q-DEIM to rank parameters for the purpose of performing model correlation when validating differential equation models. The authors perform a wide array of experiments, forming sensitivity matrices resulting from discretized differential equation models with different parameter inputs.…”
Section: Application for Subset Selection (confidence: 99%)
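For concreteness, here is a hedged sketch of the kind of Q-DEIM-style parameter ranking the citing paper describes: apply column-pivoted QR to the dominant right singular vectors of the sensitivity matrix and read the pivot order as a parameter ranking. This is the generic selection mechanism, not the authors' implementation; the rank cutoff and the reuse of `S` from the sketch above are assumptions.

```python
import numpy as np
from scipy.linalg import qr, svd

def qdeim_parameter_ranking(S, k=None):
    """Rank the parameter columns of a sensitivity matrix S (best first).

    k is the number of dominant singular directions to keep; by default a
    crude numerical-rank estimate is used.
    """
    U, sing, Vt = svd(S, full_matrices=False)
    if k is None:
        k = int(np.sum(sing > sing[0] * 1e-10))
    # Column-pivoted QR of the leading k rows of V^T: the pivot order selects
    # the parameter columns that best span the dominant sensitivity directions.
    _, _, piv = qr(Vt[:k, :], pivoting=True)
    return piv

ranking = qdeim_parameter_ranking(S)   # S from the sensitivity sketch above
print("parameter ranking (best first):", ranking)
```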
“…As with many dimension reduction techniques, the choice for $k$ (and/or $\hat{k}$ in the oversampling problem) is not always clear a priori. For instance, Sorensen and Embree (2016) find leverage score rankings move around as $k$ grows, and Phoenix et al. (2018) present results showing a sweet spot for DEIM/Q-DEIM applied to more moderate sensitivity matrix sizes rather than larger matrices with additional data; more information does not necessarily imply better outcomes. It is also worth noting that even though DEIM has been shown to perform well in reducing the dataset while capturing members of most classes (both prevalent and rare-occurring), some reduced sets still contain several data points with the same class label.…”
Section: Where To Go From Here (confidence: 99%)
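To make the point about the choice of $k$ concrete, the short check below (an illustration assumed by this write-up, not a result from either paper) reruns `qdeim_parameter_ranking` from the sketch above on a synthetic matrix for several values of $k$; the leading selections need not stay fixed as $k$ grows.

```python
import numpy as np

# Synthetic, purely illustrative matrix; neither the matrix nor the printout
# comes from the cited papers. Assumes qdeim_parameter_ranking defined above.
rng = np.random.default_rng(0)
S_demo = rng.standard_normal((40, 12)) @ np.diag(np.linspace(3.0, 0.1, 12))

for k in (2, 4, 8):
    top = qdeim_parameter_ranking(S_demo, k=k)[:4]
    print(f"k={k}: top-4 parameter columns -> {top}")
```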