A sensitivity matrix based methodology for inverse problem formulation

2009
DOI: 10.1515/jiip.2009.034
Abstract: We propose an algorithm to select parameter subset combinations that can be estimated using an ordinary least-squares (OLS) inverse problem formulation with a given data set. First, the algorithm selects the parameter combinations that correspond to sensitivity matrices with full rank. Second, the algorithm involves uncertainty quantification by using the inverse of the Fisher Information Matrix. Nominal values of parameters are used to construct synthetic data sets, and explore the effects of removing certain…
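The two stages described in the abstract can be sketched as follows. This is a minimal illustration assuming a precomputed sensitivity matrix and i.i.d. measurement noise; the function and variable names are hypothetical and do not reproduce the paper's actual implementation.

```python
import numpy as np

def rank_and_uncertainty(S, sigma=1.0, tol=1e-10):
    """Screen one candidate parameter subset, in the spirit of the
    two stages described in the abstract (names are illustrative):
    1) keep the subset only if its sensitivity matrix S
       (n_observations x n_parameters) has full column rank;
    2) quantify uncertainty via the inverse of the Fisher
       Information Matrix F = S^T S / sigma^2.
    """
    n_params = S.shape[1]
    if np.linalg.matrix_rank(S, tol=tol) < n_params:
        return None  # reject: sensitivity columns are linearly dependent
    cov = np.linalg.inv(S.T @ S / sigma**2)  # asymptotic OLS covariance
    return np.sqrt(np.diag(cov))             # standard error per parameter

# Toy synthetic setup: two independent sensitivity columns versus a
# subset whose third column duplicates the first.
t = np.linspace(0.0, 1.0, 50)
S_good = np.column_stack([t, t**2])
S_bad = np.column_stack([t, t**2, 2.0 * t])

print(rank_and_uncertainty(S_good))  # two finite standard errors
print(rank_and_uncertainty(S_bad))   # None: rank-deficient subset
```

In this sketch a rejected subset simply returns `None`; the paper's algorithm additionally ranks the surviving subsets by their uncertainty estimates.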

Cited by 91 publications (122 citation statements)
References 34 publications (64 reference statements)
“…The underlying problem of singularity of the χᵀχ matrix is quite common for inverse problems with nonlinear models. A method to handle singularity of the Fisher Information Matrix (χᵀχ) as a result of poor parameter identifiability and estimability has recently been introduced in [5]. This algorithm determines which parameter axes lie closest to the ill-conditioned directions of χᵀχ and then implements a reduced-order estimation by fixing these associated parameter values at prior estimates.…”
Section: Discussion
Confidence: 99%
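The idea quoted above can be sketched by reading the ill-conditioned directions off an eigendecomposition of χᵀχ. The helper below is hypothetical and only identifies which parameter axes to fix; the reduced-order re-estimation step of the cited algorithm is omitted.

```python
import numpy as np

def parameters_to_fix(chi, n_fix=1):
    """Hypothetical helper: find which parameter axes lie closest to
    the most ill-conditioned directions of chi^T chi, i.e. the
    eigenvectors with the smallest eigenvalues. The algorithm cited
    in [5] then fixes those parameters at prior estimates and
    re-estimates the rest; that step is omitted here.
    """
    eigvals, eigvecs = np.linalg.eigh(chi.T @ chi)  # ascending eigenvalues
    fixed = []
    for k in range(n_fix):
        v = eigvecs[:, k]                        # near-null direction
        fixed.append(int(np.argmax(np.abs(v))))  # closest parameter axis
    return fixed

# Toy example: the third column nearly duplicates the first, so the
# near-null direction is dominated by parameters 0 and 2.
t = np.linspace(0.0, 1.0, 30)
chi = np.column_stack([t, t**2, t + 1e-6 * t**2])
print(parameters_to_fix(chi))  # one of the two correlated axes, 0 or 2
```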
“…One example of this would be the output sensitivity matrix S(v) = Cx(v), which is used in first-order sensitivity analysis [10]. This matrix is also used at each iteration of the optimization method to compute the descent direction when a Gauss-Newton type algorithm is used [8].…”
Section: Adjoint Approach
Confidence: 99%
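The role of the sensitivity matrix in computing a Gauss-Newton descent direction can be illustrated with a toy least-squares problem; the names below are illustrative, not the cited papers' notation.

```python
import numpy as np

def gauss_newton_step(S, r):
    """One Gauss-Newton descent direction for min 0.5 * ||r(v)||^2:
    solve the normal equations (S^T S) d = -S^T r, here via a
    numerically safer least-squares solve.
    """
    d, *_ = np.linalg.lstsq(S, -r, rcond=None)
    return d

# Toy linear problem r(v) = S v - y: one step reaches the OLS solution.
S = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])
v = np.zeros(2)
d = gauss_newton_step(S, S @ v - y)
print(v + d)  # [1. 1.]
```

For a linear residual one step is exact; for the nonlinear models discussed above, S is re-evaluated and the step repeated at each iteration.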
“…Once again, the difference between the two approaches lies in the cascades (10) and (12): in the direct approach we have N differential equations with p-column matrices, whereas in the adjoint approach the left-hand side of (12) contains 1-column vectors. Hence, at the very least, we should expect a gain equivalent to the gain obtained in the stationary case.…”
Section: Computational Comparison
Confidence: 99%
“…called subset selection algorithm (SSA) is proposed that ranks the parameters by two properties, α and κ [9]. The parameter α is correlated to the size of the confidence regions for a parameter set, and κ is a measure of how well-conditioned the parameter Jacobian for a parameter set is.…”
Section: Introduction
Confidence: 99%
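One plausible reading of the α/κ ranking can be sketched under the assumption that κ is the condition number of the subset's sensitivity (Jacobian) matrix and α proxies confidence-region size via the norm of the OLS standard errors; the exact definitions used in [9] may differ.

```python
import numpy as np

def ssa_scores(S, sigma=1.0):
    """Score a parameter subset by the two properties named in the
    quote, under illustrative definitions: kappa is the condition
    number of the subset's sensitivity (Jacobian) matrix, and alpha
    proxies confidence-region size as the norm of the OLS standard
    errors. The exact definitions in [9] may differ.
    """
    kappa = float(np.linalg.cond(S))
    cov = sigma**2 * np.linalg.inv(S.T @ S)
    alpha = float(np.linalg.norm(np.sqrt(np.diag(cov))))
    return alpha, kappa

# A well-conditioned subset scores lower on both alpha and kappa than
# a nearly collinear one.
t = np.linspace(0.1, 1.0, 40)
alpha1, kappa1 = ssa_scores(np.column_stack([t, t**2]))
alpha2, kappa2 = ssa_scores(np.column_stack([t, t + 0.01 * t**2]))
print(kappa1 < kappa2, alpha1 < alpha2)  # True True
```

Ranking candidate subsets by (α, κ) then favors subsets whose parameters are both well determined and numerically well separated.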