Dynamical State and Parameter Estimation
2009
DOI: 10.1137/090749761

Cited by 78 publications (129 citation statements)
References 37 publications
“…which bears some similarity with [1,16]. Here, however, no discretization of the original continuous-time dynamical model is required and ∂ŷ(λ, t_i)/∂λ are computable as definite integrals.…”
Section: Korablev Fast Sampling of Evolving Systems
confidence: 99%
“…This is a standard inverse problem, and many methods for finding solutions to this problem have been developed to date (sensitivity functions [20], splines [6], interval analysis [15], adaptive observers [19], [5], [9], [12], [24], [25], [8], and particle filters and Bayesian inference methods [1]). Although these methods are based on different mathematical frameworks, they share a common feature: one is generally required to repeatedly find numerical solutions of nonlinear ordinary differential equations (ODEs) over given intervals of time (solve the direct problem).…”
Section: Introduction
confidence: 99%
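The "direct problem" this excerpt refers to, integrating the nonlinear ODE for each candidate parameter vector and comparing the simulated output with the measured data, is the costly inner step shared by all of these methods. A minimal sketch of that step, assuming a scalar measured output and a hypothetical two-state nonlinear model (the model, function names, and tolerances are illustrative, not taken from the cited papers):

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, x, p):
    # Hypothetical nonlinear ODE dx/dt = f(x, p); stands in for the
    # continuous-time dynamical model whose parameters p are sought.
    a, b = p
    return [a * x[0] - b * x[0] * x[1],
            b * x[0] * x[1] - a * x[1]]

def direct_problem(p, t_meas, x0):
    # Solve the ODE over the measurement interval for one parameter guess.
    sol = solve_ivp(model, (t_meas[0], t_meas[-1]), x0,
                    t_eval=t_meas, args=(p,), rtol=1e-8)
    return sol.y[0]          # measured output y(t) = x1(t)

def residual(p, t_meas, y_meas, x0):
    # Misfit between simulated and measured output; every evaluation
    # requires one full numerical solution of the direct problem.
    return direct_problem(p, t_meas, x0) - y_meas
```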
“…We then decrease λ further to 1 − 2δλ; since the parameter guesses have been refined, the observer gain can be reduced without increasing the error beyond ε. At each stage in this process, we use the converged result from the previous stage as the initial guess for p. This process is repeated until λ = 0, and equation (9) has morphed back into equation (1). In summary, the homotopy optimization approach follows the path of minimal error as the observer gain is decreased.…”
Section: Problem Statement
confidence: 99%
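A minimal sketch of that continuation loop, assuming a hypothetical `stage_cost(p, lam)` that stands in for the λ-dependent objective of equation (9); the actual objective and the observer-gain schedule from the quoted paper are not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

def homotopy_estimate(stage_cost, p0, n_stages=20):
    # Continuation over the homotopy parameter: start at lambda = 1
    # (easier problem, high observer gain) and march down to lambda = 0
    # (the original estimation problem).
    p = np.asarray(p0, dtype=float)
    for lam in np.linspace(1.0, 0.0, n_stages + 1):
        # Each stage is warm-started from the previous stage's minimizer,
        # so the iterates follow the path of minimal error as lambda shrinks.
        res = minimize(lambda q: stage_cost(q, lam), p, method="BFGS")
        p = res.x
    return p   # parameter estimate for the original (lambda = 0) problem
```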
“…The optimization problems are usually solved using deterministic methods, which require the solution of differential equations at each optimization step. The solution of these ODEs can be obtained using initial-value methods [10,26], shooting methods [1], or collocation methods [2]. When deterministic approaches like the steepest descent [22], Gauss-Newton [22], and Levenberg-Marquardt [20] algorithms are used in the optimization procedure, it is not uncommon to converge to a local minimum rather than the global minimum [7].…”
confidence: 99%
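For illustration, the single-shooting variant of that procedure pairs an initial-value ODE solver with a Levenberg-Marquardt least-squares step. A minimal usage sketch, reusing the hypothetical `residual` function from the direct-problem sketch above, with `t_meas`, `y_meas`, and `x0` standing for the (assumed) measurement grid, measured data, and known initial state:

```python
from scipy.optimize import least_squares

# Single-shooting Levenberg-Marquardt fit: every iteration re-integrates
# the ODE, and only a local minimum is guaranteed, so in practice several
# starting guesses p0 are tried (the local-minimum caveat from the quote).
p0 = [1.0, 0.5]                       # assumed initial parameter guess
fit = least_squares(residual, p0, args=(t_meas, y_meas, x0), method="lm")
p_hat = fit.x
```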