1991
DOI: 10.1007/978-3-642-48618-0

Interactive System Identification: Prospects and Pitfalls

Abstract: Softcover reprint of the hardcover 1st edition, 1991. The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cited by 59 publications (53 citation statements)
References 0 publications
“…In an identification context, the concept of grey boxes has been introduced to denote model structures that use some kind of prior information about the system. See, e.g., Bohlin (1991). The term tailor-made model structure has also been used.…”
Section: Don't Estimate What You Already Know!
confidence: 99%
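The grey-box idea quoted above can be made concrete with a minimal sketch (all values and the model form are illustrative, not taken from Bohlin's book): a first-order model in which the input gain is assumed known from physics, so only the remaining unknown parameter is estimated from data.

```python
import numpy as np

# Hypothetical grey-box setup: a first-order system
#   y[k+1] = a*y[k] + b*u[k]
# where the input gain b is known a priori (the "grey" prior knowledge)
# and only the pole a is estimated from input/output data.

rng = np.random.default_rng(0)
a_true, b_known = 0.8, 0.5

# Simulate input/output data with a little measurement noise.
N = 200
u = rng.standard_normal(N)
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = a_true * y[k] + b_known * u[k] + 0.01 * rng.standard_normal()

# Grey-box least squares: subtract the known physics from the output,
# so the regression involves only the single unknown parameter a.
z = y[1:] - b_known * u
a_hat = (z @ y[:-1]) / (y[:-1] @ y[:-1])
```

Estimating one parameter instead of two is the whole point of the quoted remark: what is already known is not re-estimated.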
“…The problem of computing estimates of θ based on the information in Z is a standard gray-box system identification problem, see e.g., [5][6][7]. The parameters are typically estimated using the prediction error method, which has been extensively studied, see e.g., [7].…”
Section: Problem Formulation
confidence: 99%
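The prediction error method mentioned in this excerpt picks the parameter value that minimizes the sum of squared one-step-ahead prediction errors. A minimal sketch, with an illustrative one-parameter model not taken from the cited papers:

```python
import numpy as np

# Toy gray-box model y[k+1] = theta*y[k] + u[k]; theta is the unknown
# parameter vector (scalar here) to be estimated from data Z = (u, y).

rng = np.random.default_rng(1)
theta_true = 0.6
N = 300
u = rng.standard_normal(N)
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = theta_true * y[k] + u[k] + 0.05 * rng.standard_normal()

def pem_cost(theta):
    """Sum of squared one-step-ahead prediction errors for candidate theta."""
    e = y[1:] - (theta * y[:-1] + u)
    return float(e @ e)

# Prediction error method: minimize the cost (grid search for clarity;
# real implementations use gradient-based optimization).
grid = np.linspace(0.0, 1.0, 1001)
theta_hat = grid[np.argmin([pem_cost(t) for t in grid])]
```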
“…We here propose to use a weighted quadratic cost function V (c, ϕ, C, S) and treat the problem within the standard gray-box framework available from the system identification community [5][6][7]. This approach requires a prediction model, where the IMU sensor data is used to predict camera motion, and a Kalman filter is used to compute the sequence of innovations over the calibration batch of data.…”
Section: Introduction
confidence: 99%
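The innovation-based calibration idea in this excerpt can be sketched in scalar form: run a Kalman filter for each candidate value of the unknown sensor parameter, accumulate a quadratic cost over the filter innovations, and keep the candidate with the smallest cost. The model below (a stable scalar state measured with an unknown constant bias) is an illustrative stand-in, not the IMU/camera model of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(2)
a, q, r = 0.5, 0.01, 0.04   # illustrative dynamics and noise levels
bias_true = 0.3             # unknown sensor parameter to calibrate
N = 400

# Simulate a stable scalar state and biased measurements of it.
x = np.zeros(N + 1)
for k in range(N):
    x[k + 1] = a * x[k] + np.sqrt(q) * rng.standard_normal()
z = x[1:] + bias_true + np.sqrt(r) * rng.standard_normal(N)

def innovation_cost(bias):
    """Normalized sum of squared Kalman innovations for a candidate bias."""
    x_hat, p = 0.0, 1.0
    cost = 0.0
    for zk in z:
        x_pred = a * x_hat              # time update
        p_pred = a * a * p + q
        nu = zk - bias - x_pred         # innovation for this candidate
        s = p_pred + r                  # innovation variance
        k_gain = p_pred / s
        x_hat = x_pred + k_gain * nu    # measurement update
        p = (1 - k_gain) * p_pred
        cost += nu * nu / s
    return cost

# A well-calibrated bias makes the innovations small and white, so the
# quadratic innovation cost is minimized near the true parameter value.
grid = np.linspace(0.0, 1.0, 201)
bias_hat = grid[np.argmin([innovation_cost(b) for b in grid])]
```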
“…First, we would prefer to compare and select amongst different surrogates, choosing that model which incurs the smallest model prediction error estimate or which is computationally least expensive [5,6]. Second, we would like to adapt to information generated during the construction-validation process; a sequential approach offers clear advantages, permitting the algorithm (and the appeals to the expensive S(£)) to terminate when the (or a) model prediction error estimate is sufficiently small.…”
Section: Call Mv(s(p)s(p) P(p)n Es1c2) -'-El
confidence: 99%
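The surrogate-selection criterion in this excerpt, choosing the candidate with the smallest estimated prediction error, can be sketched with a construct/validate data split. The expensive simulation is stood in for by a cheap function here; all names and degrees are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def S(p):
    # Stand-in for the expensive simulation output at design point p.
    return np.sin(2 * p)

# Sample the simulation and partition into "construct" and "validate" sets.
p = rng.uniform(-1, 1, 40)
y = S(p)
p_con, p_val = p[:30], p[30:]
y_con, y_val = y[:30], y[30:]

def holdout_error(degree):
    """Validation RMS prediction error of a polynomial surrogate."""
    coeffs = np.polyfit(p_con, y_con, degree)
    resid = np.polyval(coeffs, p_val) - y_val
    return float(np.sqrt(np.mean(resid ** 2)))

# Select the surrogate with the smallest prediction error estimate.
errors = {d: holdout_error(d) for d in (1, 3)}
best_degree = min(errors, key=errors.get)
```

On this odd, mildly nonlinear target the cubic surrogate wins the comparison; the same machinery extends to any family of candidate surrogates.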
“…More broadly, the work is founded upon several related streams of inquiry. From system identification (control) theory [3][4][5][6] we borrow the notion of algorithmic logical empiricism, in which available data is systematically incorporated into the model construction and validation processes; from the design of experiments [7] we appreciate the need for sampling heuristics and response surfaces; from statistical prediction rules and artificial neural networks [8-11] we adopt the concept of "construct and validate" (or "train and test") data partitions; from the theory of machine learning [12,13] we appropriate the "probably approximately correct" framework; from Monte Carlo methods [14] and the classical equivalence of measure and probability [15] we derive our sampling procedures; from nonparametric statistical theory [16] we deduce our statistical error estimates; from scattered-data methodology [17] we derive our model-construction procedures; and from statistical quality-control theory [18,19] we adapt relevant a posteriori reliability concepts. Lastly, our work, in philosophy, is most closely aligned to earlier seminal efforts in statistical simulation surrogates, in which, first, the need for surrogates is motivated, second, the special role of statistical statements is recognized, and third, the idiosyncrasies of (largely deterministic) computer experiments are identified; other "non-surrogate" statistically motivated approaches to the incorporation of expensive simulations into optimization studies [24] are also relevant to our study.…”
Section: Introduction
confidence: 99%