2005
DOI: 10.1002/acs.886
How to exploit external model of data for parameter estimation?

Abstract: Any cooperation in multiple-participant decision making (DM) relies on an exchange of individual knowledge pieces and aims. A general methodology for their rational exploitation, without calling for an objective mediator, is still missing. The desired methodology is proposed for an important particular case, in which a participant performing Bayesian parameter estimation is offered a model relating the observable data to their past history. The designed solution is based on the so-called fully probabilistic design (FPD)…

Cited by 13 publications (13 citation statements)
References 9 publications
“…Generally, some pre-simulations for the given data have indicated that higher values on the diagonal of V 0 (k) result in lower MSE in the logarithmic pattern. Indeed, data normalization is highly preferable for DMA, because if time-series are rescaled to fit between 0 and 1, then setting V 0 (k) = I corresponds to reasonably high volatility [112,116,117]. Finally, it should be noticed that if E t are residuals from modelling the independent variable Y t , and e t are residuals from modelling y t , where Y t = a·y t + b, with a and b being some scaling parameters (which corresponds to normalization), then E t = a·e t .…”
Section: Variance Matrix
confidence: 99%
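The scaling claim in the statement above — if Y_t = a·y_t + b, then the residuals satisfy E_t = a·e_t — can be checked numerically for least-squares regression with an intercept (the intercept absorbs the shift b). The sketch below uses synthetic data; the regressors and sample size are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
y = rng.normal(size=n)
# design matrix with an intercept column and one synthetic regressor
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# residuals e_t from modelling y_t
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta

# rescale: Y_t = a*y_t + b (corresponds to normalization)
a, b = 5.0, 2.0
Y = a * y + b

# residuals E_t from modelling Y_t with the same regressors
Beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
E = Y - X @ Beta

# E_t = a * e_t holds to machine precision
print(np.allclose(E, a * e))
```

The identity holds because least-squares fitting is linear in the response and the intercept column makes the fit equivariant under affine rescaling of y.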
“…Model 1 consists simply of the drivers indicated by the literature review and listed in Table 1. However, according to, for example, Alquist et al. [117], autoregressive models are very common in oil price modelling. Therefore, Model 2 is constructed by adding the 1st lag of WTI to the drivers present in the initial Model 1.…”
Section: Time-Varying Parameters Preselection
confidence: 99%
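Augmenting a driver set with the first lag of the dependent variable, as the statement above describes for Model 2, is mechanical. A minimal sketch with pandas; the WTI values and the single driver series are hypothetical placeholders:

```python
import pandas as pd

# hypothetical monthly WTI prices and one hypothetical Model 1 driver
df = pd.DataFrame({
    "wti":    [50.0, 52.1, 51.3, 53.7, 55.0],
    "driver": [1.0, 1.1, 0.9, 1.2, 1.3],
})

# Model 2: add the 1st lag of WTI to the Model 1 regressors
df["wti_lag1"] = df["wti"].shift(1)

# the first observation is lost to lagging
model2 = df.dropna().reset_index(drop=True)
print(model2)
```

The `shift(1)` call moves each WTI value down one period, so `wti_lag1` at time t holds WTI at t-1; dropping the resulting NaN row shortens the estimation sample by one observation.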
“…The distribution of the initial state x_0 should be chosen taking into account the provided prior knowledge f^*. The methodology proposed in [3] solves the analogous problem of incorporating external knowledge for the case of parameter estimation. Modifying it for Bayesian state estimation, one can transform the prior (flat) pdf f(x_0) into the following one:…”
Section: Prior Knowledge Elicitation
confidence: 99%
“…Such a definition of knowledge incorporation was originally proposed in [3], where detailed explanations can be found. The forms of the function *(x_0) and the model Z(d|x_0) in (11) depend on the cardinality of the set *, denoted by ˚.…”
Section: Prior Knowledge Elicitation
confidence: 99%
“…Specifically, porous macromolecular materials have attracted lasting attention from the scientific community [58,63,64,65,66] due to their functionality and the possibility of mechanical control. Nevertheless, the general task of predicting and controlling [67,68,69] the desired properties of self-assembling materials remains quite challenging. Several attempts to investigate stretched porous polyethylene (PE) filled with LC compounds have already been made [56,70,71,72,73].…”
Section: Introduction
confidence: 99%