2017
DOI: 10.48550/arxiv.1709.08471
Preprint

Bayesian Filtering for ODEs with Bounded Derivatives

Abstract: Recently there has been increasing interest in probabilistic solvers for ordinary differential equations (ODEs) that return full probability measures, instead of point estimates, over the solution and can incorporate uncertainty over the ODE at hand, e.g. if the vector field or the initial value is only approximately known or evaluable. The ODE filter proposed in [9, 16] models the solution of the ODE by a Gauss-Markov process which serves as a prior in the sense of Bayesian statistics. While previous work emp…
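The filtering scheme the abstract refers to can be sketched as a minimal Kalman-filter ODE solver. This is an illustrative sketch only, assuming a once-integrated Wiener process prior (q = 1) and noise-free evaluations of f; the function and variable names are hypothetical, not taken from the paper:

```python
import numpy as np

def ibm_ode_filter(f, x0, h, num_steps, sigma2=1.0):
    """Gaussian ODE filter sketch: once-integrated Wiener process (IBM, q=1)
    prior, with evaluations of f treated as noise-free derivative data."""
    A = np.array([[1.0, h], [0.0, 1.0]])               # prior transition over step h
    Q = sigma2 * np.array([[h**3 / 3, h**2 / 2],
                           [h**2 / 2, h       ]])      # process-noise covariance
    H = np.array([[0.0, 1.0]])                         # we "observe" the derivative
    m = np.array([x0, f(x0)])                          # initial mean: (x, x')
    P = np.zeros((2, 2))                               # initial covariance
    for _ in range(num_steps):
        m_pred = A @ m                                 # predict mean
        P_pred = A @ P @ A.T + Q                       # predict covariance
        z = f(m_pred[0])                               # evaluate f at predicted x
        S = (H @ P_pred @ H.T).item()                  # innovation variance
        K = (P_pred @ H.T / S).ravel()                 # Kalman gain
        m = m_pred + K * (z - m_pred[1])               # update mean
        P = P_pred - np.outer(K, K) * S                # update covariance
    return m, P

# x' = -x with x(0) = 1; the exact solution at t = 1 is exp(-1)
m, P = ibm_ode_filter(lambda x: -x, 1.0, h=0.01, num_steps=100)
```

The returned mean `m[0]` approximates x(1) while `P` carries the filter's posterior uncertainty, which is the "full probability measure" aspect the abstract emphasizes.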

Cited by 4 publications (9 citation statements) | References 7 publications
“…after the sequence of error covariance matrices has converged. These results neither cover filters with the integrated Ornstein-Uhlenbeck process (IOUP) prior [20] nor non-zero noise models on evaluations of f .…”
Section: Introduction
confidence: 94%
“…A global analysis for IOUP is therefore more complicated than for IBM: Recall from (2.6) that, for q = 1, the mean prediction for x((n + 1)h) is

$$\begin{pmatrix} m^{-,(0)}((n+1)h) \\ m^{-,(1)}((n+1)h) \end{pmatrix} = \begin{pmatrix} m^{(0)}(nh) + \frac{1 - e^{-\theta h}}{\theta}\, m^{(1)}(nh) \\ e^{-\theta h}\, m^{(1)}(nh) \end{pmatrix}, \qquad (5.5)$$

which pulls both $m^{-,(0)}$ and $m^{-,(1)}$ towards zero (or some other prior mean) compared to its Taylor-expansion prediction for θ = 0. While this is useful for ODEs converging to zero, such as ẋ = −x, it is problematic for diverging ODEs, such as ẋ = x [20]. As shown in Theorem 5.2, this effect is asymptotically negligible for local convergence, but it might matter globally and, therefore, might necessitate stronger assumptions on f than Assumption 1, such as a bound on ‖f‖_∞ which would globally bound {y(nh); n = 0, …}.…”
Section: Theorem 5.2 (Local Truncation Error)
confidence: 99%
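The pull-to-zero effect this quote describes can be checked numerically. The sketch below assumes the standard IOUP (q = 1) mean transition, in which the Taylor increment h is replaced by (1 − e^{−θh})/θ and the derivative estimate is damped by e^{−θh}; for θ → 0 it recovers the IBM/Taylor prediction (x + h·ẋ, ẋ):

```python
import numpy as np

def ioup_mean_prediction(x, dx, h, theta):
    """One-step mean prediction under a once-integrated Ornstein-Uhlenbeck
    (IOUP, q=1) prior with drift theta; theta=0 gives the IBM/Taylor case."""
    if theta == 0.0:
        return x + h * dx, dx
    phi = (1.0 - np.exp(-theta * h)) / theta   # integral of exp(-theta*s) over [0, h]
    return x + phi * dx, np.exp(-theta * h) * dx

h = 0.1
x, dx = 1.0, 1.0   # e.g. the diverging ODE x' = x at x = 1
taylor = ioup_mean_prediction(x, dx, h, theta=0.0)
ioup = ioup_mean_prediction(x, dx, h, theta=5.0)
# for theta > 0 both components are pulled towards zero relative to Taylor
```

Since (1 − e^{−θh})/θ < h and e^{−θh} < 1 for θ > 0, the IOUP prediction always undershoots the Taylor prediction in both components, which is exactly why the quote flags diverging ODEs such as ẋ = x as the problematic case.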
“…The former class includes classical ODE solvers that are stochastically perturbed (Abdulle and Garegnani, 2020, Conrad et al., 2017, Lie et al., 2019, Teymur et al., 2018), solvers that approximately sample from a Bayesian inference problem (Tronarp et al., 2019b), and solvers that perform Gaussian process regression on stochastically generated data (Chkrebtii et al., 2016). On the other hand, deterministic solvers formulate the problem as a Gaussian process regression problem, either with a data generation mechanism (Hennig and Hauberg, 2014, Kersting and Hennig, 2016, Magnani et al., 2017, Schober et al., 2014, Skilling, 1992) or by attempting to constrain the estimate to solve the ODE at each point on the mesh (John et al., 2019, Tronarp et al., 2019b). For computational reasons it is fruitful to select the Gaussian process prior to be of Markov type (Kersting and Hennig, 2016, Magnani et al., 2017, Tronarp et al., 2019b), as this reduces the cost of inference from O(N³) to O(N) (Särkkä, 2010, Särkkä et al., 2013).…”
Section: Introduction
confidence: 99%
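The O(N³) → O(N) point in this quote can be illustrated on the simplest Gauss-Markov prior, a Wiener process: the dense GP regression solve and a forward Kalman recursion yield the same posterior mean at the final time, but the recursion touches each data point once. This is a sketch under that assumption, not the cited papers' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# noisy observations of a latent Wiener process at N time points
N, sigma2, r = 50, 1.0, 0.1
t = np.sort(rng.uniform(0.1, 2.0, N))
y = rng.normal(0.0, 1.0, N)

# O(N^3) route: dense GP regression with kernel k(s, u) = sigma2 * min(s, u)
K = sigma2 * np.minimum.outer(t, t)
alpha = np.linalg.solve(K + r * np.eye(N), y)
dense_mean_last = K[-1] @ alpha          # posterior mean at t[-1]

# O(N) route: Kalman filter on the equivalent scalar state-space model
m, P, t_prev = 0.0, 0.0, 0.0
for ti, yi in zip(t, y):
    P += sigma2 * (ti - t_prev)          # predict: variance grows linearly
    k = P / (P + r)                      # Kalman gain
    m += k * (yi - m)                    # update mean
    P *= 1.0 - k                         # update variance
    t_prev = ti

# both routes agree on the posterior mean at the final time
```

The agreement is exact (up to floating point) because the Wiener-process GP with i.i.d. Gaussian observation noise is itself a linear-Gaussian state-space model, which is the structure the cited Markov-prior argument exploits.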
“…Because of the connection between inference with Gauss-Markov process priors and spline interpolation (Kimeldorf and Wahba, 1970, Sidhu and Weinert, 1979, Weinert and Kailath, 1974), the Gaussian process regression approaches are intimately connected with the spline approach to ODEs (Schumaker, 1982, Wahba, 1973).…”
Section: Introduction
confidence: 99%