Proceedings of the 44th IEEE Conference on Decision and Control
DOI: 10.1109/cdc.2005.1582442
Kalman Filtering by Minimax Criterion with Uncertain Noise Intensity Functions

Cited by 9 publications (5 citation statements)
References 13 publications
“…A typical approach is to construct an optimization procedure based on a minimax estimator for the hidden state, whereby one attempts to minimize a maximum expected loss over the space of possible models. See for instance the work of Borisov [6,7], Miller and Pankov [32], Siemenikhin, Lebedev and Platonov [34,35] or Verdú and Poor [37]. By design, such estimators take into account a generally large set of models, even though many of them should be considered to be very implausible, thus often sacrificing filter performance under the most statistically reasonable model.…”
Section: Robust Filtering Via Nonlinear Expectations (mentioning, confidence: 99%)
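The minimax criterion described in this excerpt can be written compactly as follows. This is the standard textbook formulation under an assumed quadratic loss, not an equation taken from the paper under review; here P ranges over the uncertainty set of plausible probability models, x is the hidden state, y the observations, and x̂(·) a candidate estimator:

% Minimax estimate: minimize the worst-case expected loss over the model set \mathcal{P}.
\hat{x}^{\mathrm{mm}} \in \arg\min_{\hat{x}(\cdot)} \; \max_{P \in \mathcal{P}} \; \mathbb{E}_{P}\!\left[\, \lVert x - \hat{x}(y) \rVert^{2} \,\right]

The outer minimization is what the excerpt calls minimizing "a maximum expected loss": the estimator is scored against the least favourable member of the model set, however implausible that member may be.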
“…Uncertainty-robust filters for such systems were proposed by Borisov [7,8] via minimax filtering, whereby a best estimate is sought with respect to the worst case scenario, where 'scenarios' here are represented by probability distributions over the space of all possible parameter values. Such minimax procedures are by now classical, designed to find the estimate which minimizes the maximum expected loss over a range of plausible models, an approach which may be traced back at least as far as Wald [33], and has been applied in various settings, principally in those with linear underlying dynamics; see for example Martin and Mintz [28], Miller and Pankov [29], Siemenikhin [30], Siemenikhin, Lebedev and Platonov [31] or Verdú and Poor [32]. Invariably, however, by focusing exclusively on the worst case scenario, such procedures do not necessarily ensure a satisfactory performance under statistically realistic scenarios, and moreover make no attempt to learn the true parameter values, or more generally to evaluate our uncertainty and how it should be updated to reflect new observations.…”
Section: Introduction (mentioning, confidence: 99%)
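To make the worst-case design concrete, here is a minimal, self-contained sketch (a hypothetical illustration, not the construction of Borisov or the other cited authors) for a scalar observation y = x + v with x ~ N(0, p), v ~ N(0, r), and the noise intensity r known only to lie in an interval [r_lo, r_hi]. For a linear estimator x̂ = k·y, the expected squared error J(k, r) = (1 − k)²·p + k²·r is increasing in r, so the worst case is attained at r_hi and the minimax gain coincides with the Kalman gain designed for the largest noise intensity.

# Minimax gain for a scalar observation model (hypothetical sketch).
import numpy as np

def minimax_gain(p, r_lo, r_hi, n_grid=2001):
    """Grid search for argmin_k max_{r in [r_lo, r_hi]} J(k, r),
    where J(k, r) = (1 - k)**2 * p + k**2 * r is the expected
    squared error of the linear estimator xhat = k * y."""
    ks = np.linspace(0.0, 1.0, n_grid)
    rs = np.array([r_lo, r_hi])  # J is increasing in r, so the endpoints suffice
    J = (1.0 - ks[:, None]) ** 2 * p + ks[:, None] ** 2 * rs[None, :]
    worst = J.max(axis=1)        # worst-case loss for each candidate gain
    return ks[np.argmin(worst)]

if __name__ == "__main__":
    p, r_lo, r_hi = 1.0, 0.5, 2.0
    k_num = minimax_gain(p, r_lo, r_hi)
    k_ana = p / (p + r_hi)       # Kalman gain designed for the worst-case intensity
    print(f"numerical minimax gain: {k_num:.4f}  analytic: {k_ana:.4f}")

The grid search recovers the analytic gain p/(p + r_hi), which illustrates the point made in both excerpts: the minimax filter is tuned to the least favourable model in the uncertainty set, which protects the worst case at the cost of performance under more statistically plausible noise intensities.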
“…Such minimax procedures are by now classical, designed to find the estimate which minimizes the maximum expected loss over a range of plausible models, an approach which may be traced back at least as far as Wald [32], and has been applied in various settings, principally in those with linear underlying dynamics; see for example Martin and Mintz [27], Miller and Pankov [28], Siemenikhin [29], Siemenikhin, Lebedev and Platonov [30] or Verdú and Poor [31]. Invariably, however, by focusing exclusively on the worst case scenario, such procedures do not necessarily ensure a satisfactory performance under statistically realistic scenarios, and moreover make no attempt to learn the true parameter values, or more generally to evaluate our uncertainty and how it should be updated to reflect new observations.…”
Section: Introduction (mentioning, confidence: 99%)