2007
DOI: 10.1515/jiip.2007.038
Sensitivity functions and their uses in inverse problems

Abstract: In this note we present a critical review of some of the positive features as well as some of the shortcomings of the generalized sensitivity functions (GSF) of Thomaseth-Cobelli in comparison to traditional sensitivity functions (TSF). We do this from the computational perspective of ordinary least squares estimation or inverse problems, using two illustrative examples: the Verhulst-Pearl logistic growth model and a recently developed agricultural production network model. Because GSF provide information on …
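For orientation, below is a minimal sketch of how traditional sensitivity functions can be computed for the Verhulst-Pearl logistic model. The model form dx/dt = r*x*(1 - x/K), the parameter values, and the finite-difference approach are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: traditional sensitivity functions (TSF) for the Verhulst-Pearl
# logistic model dx/dt = r*x*(1 - x/K), computed by central finite
# differences on the ODE solution. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def logistic(t, x, r, K):
    return r * x * (1.0 - x / K)

def solve(theta, x0=0.1, t_eval=np.linspace(0, 25, 100)):
    r, K = theta
    sol = solve_ivp(logistic, (t_eval[0], t_eval[-1]), [x0],
                    args=(r, K), t_eval=t_eval, rtol=1e-8, atol=1e-10)
    return sol.y[0]

def tsf(theta, h=1e-6):
    """Columns are dx/dr and dx/dK along the time grid."""
    cols = []
    for i in range(len(theta)):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[i] += h * abs(theta[i])
        tm[i] -= h * abs(theta[i])
        cols.append((solve(tp) - solve(tm)) / (tp[i] - tm[i]))
    return np.column_stack(cols)

chi = tsf([0.7, 17.5])   # sensitivity matrix: one row per observation time
print(chi.shape)
```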

Cited by 55 publications (60 citation statements)
References 15 publications
“…Possibly a sensitivity analysis involving partial derivatives may be used in selecting an optimal weight (29 was not included in the model. This is often added to correct for a shift due to upstream AIF sampling (5).…”
Section: Discussion
confidence: 99%
“…If N is known we may consider the animal units per system size or the units concentration in the stochastic process C_N(t) = X(t)/N with sample paths c_N(t). For large systems this approach leads to a deterministic approximation (obtained as solutions to the system rate equation defined below) to the stochastic equation (4), in terms of c(t), the large sample size average over sample paths or trajectories c…”
Section: Stochastic and Deterministic Models
confidence: 99%
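A minimal sketch of the scaling described in this excerpt, assuming a density-dependent birth process with logistic rates rather than the paper's production network model: the scaled sample path C_N(t) = X(t)/N of a Gillespie simulation approaches the solution of the deterministic rate equation as N grows.

```python
# Sketch of the density scaling C_N(t) = X(t)/N: a Gillespie simulation of a
# density-dependent birth process whose scaled paths approach the logistic
# rate equation dc/dt = r*c*(1 - c/K) as N grows. Rates and parameter values
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def gillespie_logistic(N, r=0.7, K=17.5, c0=0.1, t_end=25.0):
    X, t = int(c0 * N), 0.0
    times, states = [t], [X / N]
    while t < t_end:
        rate = r * X * max(1.0 - X / (K * N), 0.0)
        if rate <= 0.0:
            break
        t += rng.exponential(1.0 / rate)   # time to the next birth event
        X += 1
        times.append(t)
        states.append(X / N)               # sample path of C_N(t)
    return np.array(times), np.array(states)

t_small, c_small = gillespie_logistic(N=50)    # noisy path
t_large, c_large = gillespie_logistic(N=5000)  # close to the deterministic limit
```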
“…So at least intuitively, sampling more data points in this region would result in more information about the parameters κ and therefore more accurate estimates for them. By computing the correlation matrix whose elements are given by standard formulas in least squares theory [9,43], one can also observe that strong correlations exist between estimates for κ_3 and κ_4. In fact, the correlation matrix for these parameters is given by …, which is in agreement with the dynamics of the curves shown in Figure 7.…”
Section: Sensitivity Analysis
confidence: 99%
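A minimal sketch of the standard least-squares correlation computation mentioned in this excerpt. The sensitivity matrix chi below is a random placeholder, and sigma^2 = 1 is an assumption; in practice chi would come from the sensitivity functions of the fitted model.

```python
# Sketch of the standard least-squares correlation estimate: given a
# sensitivity matrix chi (rows = observation times, columns = parameters),
# the asymptotic covariance is sigma^2 * (chi^T chi)^{-1} and the correlation
# matrix is obtained by normalizing it.
import numpy as np

def correlation_matrix(chi, sigma2=1.0):
    cov = sigma2 * np.linalg.inv(chi.T @ chi)   # asymptotic covariance of the estimator
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)                 # entries in [-1, 1]

chi = np.random.default_rng(1).normal(size=(100, 4))  # placeholder sensitivities
R = correlation_matrix(chi)
print(np.round(R, 2))   # off-diagonal entries near +/-1 flag strongly correlated parameters
```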
“…It is now well known that this matrix and its condition number play a fundamental role in a range of useful ideas such as model comparison [12] (the Akaike Information Criteria, the Takeuchi Information Criteria, etc.), generalized sensitivity functions [6,7,40] and experimental design (duration, frequency, quality, etc., of observations required to reliably estimate parameters) as well as computation of standard errors and confidence intervals [4,5,7,19]. Brun et al. [11] and Burth et al. [13] proposed analyses that use submatrices of the FIM χ^T χ. Burth et al. implement a reduced-order estimation by determining which parameter axes lie closest to the ill-conditioned directions of χ^T χ, and then by fixing the associated parameter values at prior estimates throughout an iterative estimation process.…”
Section: Introduction
confidence: 99%
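A minimal sketch of the FIM diagnostics described here, assuming a placeholder sensitivity matrix: the condition number of χ^T χ and a ranking of parameter axes by their alignment with its small-eigenvalue (ill-conditioned) directions, in the spirit of the reduced-order estimation attributed to Burth et al.

```python
# Sketch of FIM-based diagnostics: condition number of chi^T chi and which
# parameter axes lie closest to its ill-conditioned (small-eigenvalue)
# directions, making them candidates for being fixed at prior values.
# chi is a placeholder sensitivity matrix; n_fix is an assumption.
import numpy as np

def ill_conditioned_axes(chi, n_fix=1):
    fim = chi.T @ chi                       # Fisher information matrix (up to sigma^2)
    evals, evecs = np.linalg.eigh(fim)      # eigenvalues in ascending order
    cond = evals[-1] / evals[0]             # condition number of the FIM
    # Eigenvectors with the smallest eigenvalues span the ill-conditioned
    # subspace; parameter axes with the largest components there are the
    # least constrained by the data.
    weights = np.abs(evecs[:, :n_fix]).sum(axis=1)
    return cond, np.argsort(weights)[::-1]

chi = np.random.default_rng(2).normal(size=(100, 4))
chi[:, 3] = chi[:, 2] + 1e-3 * chi[:, 3]    # make two columns nearly collinear
cond, ranking = ill_conditioned_axes(chi)
print(f"condition number ~ {cond:.1e}; fix-first candidates: {ranking}")
```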
“…Brun et al. determined identifiability of parameter combinations using the eigenvalues of submatrices that result from only using some columns of χ^T χ. Motivated by these efforts and those on the relationship between ill-conditioning of the FIM and the quality of parameter estimates investigated in [5,6,7], we here use the sensitivity matrix χ to develop a methodology to assist one in parameter estimation or inverse problem formulations.…”
Section: Introduction
confidence: 99%
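A minimal sketch of the submatrix idea attributed to Brun et al., under the assumption that identifiability of a parameter subset can be screened by the smallest eigenvalue of the corresponding submatrix of χ^T χ; the matrix chi and the subset size are placeholders.

```python
# Sketch: score each parameter subset (columns of the sensitivity matrix chi)
# by the smallest eigenvalue of the corresponding submatrix of chi^T chi;
# subsets whose smallest eigenvalue stays well above zero are the better
# conditioned, hence more reliably estimated, combinations.
import numpy as np
from itertools import combinations

def rank_subsets(chi, k=2):
    fim = chi.T @ chi
    scores = {}
    for cols in combinations(range(chi.shape[1]), k):
        sub = fim[np.ix_(cols, cols)]              # submatrix from the selected columns
        scores[cols] = np.linalg.eigvalsh(sub)[0]  # smallest eigenvalue
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

chi = np.random.default_rng(3).normal(size=(100, 4))
for cols, lam_min in rank_subsets(chi):
    print(cols, f"{lam_min:.3g}")
```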