2018
DOI: 10.1002/stc.2309

Uncertainty quantification for model parameters and hidden state variables in Bayesian dynamic linear models

Abstract: The quantification of uncertainty associated with the model parameters and the hidden state variables is a key missing aspect of existing Bayesian dynamic linear models. This paper proposes two methods for carrying out the uncertainty quantification task: (a) the maximum a posteriori with Laplace approximation procedure (LAP‐P) and (b) the Hamiltonian Monte Carlo procedure (HMC‐P). A comparative study of LAP‐P with HMC‐P is conducted on simulated data as well as real data collected on a dam in…

Cited by 4 publications (5 citation statements)
References 45 publications
“…Equation (20) is the objective function, which aims to maximize the joint probability density of observations (Equation (21)). Any optimization algorithm,52 such as batch gradient descent, can be implemented to search for the optimal solution. The termination criterion is based on the log-likelihood value as follows:…”
Section: Bayesian Dynamic Linear Models
confidence: 99%
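The quoted passage describes maximizing a joint log-likelihood with a gradient method and terminating on the change in the log-likelihood value. A minimal sketch of that idea, using a simple Gaussian model rather than the paper's dynamic linear model (the function names, learning rate, and tolerance here are illustrative assumptions, not values from the source):

```python
import numpy as np

def log_likelihood(theta, y):
    # Hypothetical mean log-likelihood of i.i.d. Gaussian data;
    # theta = (mean, log of standard deviation).
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return np.mean(-0.5 * np.log(2 * np.pi) - log_sigma
                   - 0.5 * ((y - mu) / sigma) ** 2)

def maximize(theta0, y, lr=0.01, tol=1e-8, max_iter=10_000):
    """Batch gradient ascent; terminates when the log-likelihood
    stops improving by more than `tol`, as in the quoted criterion."""
    theta = np.asarray(theta0, dtype=float)
    ll_prev = log_likelihood(theta, y)
    eps = 1e-6
    for _ in range(max_iter):
        # Central-difference numerical gradient of the log-likelihood.
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            step = np.zeros_like(theta)
            step[i] = eps
            grad[i] = (log_likelihood(theta + step, y)
                       - log_likelihood(theta - step, y)) / (2 * eps)
        theta += lr * grad
        ll = log_likelihood(theta, y)
        if abs(ll - ll_prev) < tol:   # log-likelihood-based termination
            break
        ll_prev = ll
    return theta

rng = np.random.default_rng(0)
y = rng.normal(2.0, 0.5, size=500)
theta_hat = maximize([0.0, 0.0], y)
```

At convergence `theta_hat` approaches the maximum-likelihood estimates (the sample mean and the population standard deviation of `y`); any other optimizer, e.g. Newton-Raphson, could replace the gradient step while keeping the same termination criterion.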
“…The complete model matrices A, C, Q, and R required for state estimation using the Kalman filter are described in Appendix B. The vector of unknown parameters which needs to be estimated using an optimization algorithm 8,17,41 is given by $\boldsymbol{\theta} = [\sigma_w^{\mathtt{LT}}\;\; \sigma_w^{\mathtt{AR}}\;\; \sigma_w^{\mathtt{TP}}\;\; \sigma_v]$, where $\sigma_w^{\mathtt{LT}}$ is the standard deviation of the local trend, $\sigma_w^{\mathtt{AR}}$ is the standard deviation of the AR process, $\sigma_w^{\mathtt{TP}}$ is the standard deviation of the local trend (TP) in TM, and $\sigma_v$ is the standard deviation for the observation error. The initial parameter values and the parameter values optimized using the Newton-Raphson 8 technique by maximizing the joint log-likelihood 42 are $\boldsymbol{\theta}_0 = [10^{-6}\;\; 0.1\;\; 10^{-6}\;\; 1]$ and $\boldsymbol{\theta}^{*} = [2.16\times10^{-6}\;\; 0.092\;\; 6.5\times10^{-7}\;\; 0.054]$.…”
Section: Applied Examples
confidence: 99%
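The quoted passage estimates noise standard deviations by maximizing the joint log-likelihood of the observations under a Kalman filter. A minimal sketch of how that likelihood is evaluated via the prediction-error (innovation) decomposition, on a toy local-level model rather than the dam model from the source (all data and parameter values below are illustrative assumptions):

```python
import numpy as np

def kalman_log_likelihood(y, A, C, Q, R, x0, P0):
    """Joint log-likelihood of scalar observations y under a
    linear-Gaussian state-space model, accumulated from the
    Gaussian density of each one-step-ahead innovation."""
    x, P = x0.copy(), P0.copy()
    ll = 0.0
    for yt in y:
        # Prediction step.
        x = A @ x
        P = A @ P @ A.T + Q
        # Innovation and its variance (scalar observation assumed).
        v = yt - (C @ x).item()
        S = (C @ P @ C.T).item() + R
        ll += -0.5 * (np.log(2.0 * np.pi * S) + v * v / S)
        # Update step.
        K = P @ C.T / S          # Kalman gain
        x = x + K * v
        P = P - K @ (C @ P)
    return ll

# Simulated random-walk-plus-noise data (sigma_w = 0.1, sigma_v = 0.5).
rng = np.random.default_rng(1)
x_true = np.cumsum(rng.normal(0.0, 0.1, size=200))
y = x_true + rng.normal(0.0, 0.5, size=200)

A = np.eye(1); C = np.eye(1)
x0 = np.zeros((1, 1)); P0 = np.eye(1)
ll_good = kalman_log_likelihood(y, A, C, np.array([[0.01]]), 0.25, x0, P0)
ll_bad = kalman_log_likelihood(y, A, C, np.array([[1.0]]), 4.0, x0, P0)
```

An optimizer such as Newton-Raphson searches over the noise standard deviations (here entering through Q and R) for the values that maximize this function; parameters matching the data-generating process yield a higher log-likelihood than mis-specified ones.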
“…The model construction consists in pre‐defining a vector of hidden state variables included in the model for interpreting the data. Examples of model construction are illustrated in several case studies. The warm‐up is employed for approximating the initial distribution for each model parameter.…”
Section: Rao‐Blackwellized Particle Filter
confidence: 99%
“…Examples of model construction are illustrated in several case studies. 41,42 The warm-up is employed for approximating the initial distribution for each model parameter. For this purpose, it can employ either the Markov chain Monte Carlo or Laplace approximation.…”
Section: Framework Architecture
confidence: 99%