2020
DOI: 10.1137/20m1315968
Maximum Likelihood Estimation and Uncertainty Quantification for Gaussian Process Approximation of Deterministic Functions

Abstract: Despite the ubiquity of the Gaussian process regression model, few theoretical results are available that account for the fact that parameters of the covariance kernel typically need to be estimated from the data set. This article provides one of the first theoretical analyses in the context of Gaussian process regression with a noiseless data set. Specifically, we consider the scenario where the scale parameter of a Sobolev kernel (such as a Matérn kernel) is estimated by maximum likelihood. We show that t…
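The scale-parameter estimation discussed in the abstract admits a closed form: for a kernel of the form k = sigma^2 * k0, with k0 fixed and the data noiseless, the log-likelihood is maximized at sigma_hat^2 = y^T K0^{-1} y / n. The sketch below illustrates this under the assumption of a Matérn-3/2 kernel; the function names, length-scale, and test function are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def matern32(X1, X2, ell=0.2):
    """Matern-3/2 kernel with unit scale; ell is an illustrative length-scale."""
    r = cdist(X1, X2) / ell
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

# Noiseless observations of a deterministic function f.
f = lambda x: np.sin(2.0 * np.pi * x[:, 0])
X = np.random.default_rng(0).uniform(size=(30, 1))
y = f(X)

# With k = sigma^2 * k0 and noiseless data, the log-likelihood is
# maximized in closed form at sigma_hat^2 = y^T K0^{-1} y / n.
K0 = matern32(X, X) + 1e-10 * np.eye(len(X))  # small jitter for stability
sigma2_hat = float(y @ np.linalg.solve(K0, y)) / len(X)
print(sigma2_hat)
```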

Cited by 24 publications (40 citation statements)
References 50 publications
“…For the purpose of this exploratory work, values of Z that are orders of magnitude smaller than 1 are interpreted as indicating that the distributional output from the PNM is under-confident, while values that are orders of magnitude greater than 1 indicate that the PNM is over-confident. A PNM that is neither under- nor over-confident is said to be calibrated (precise definitions of the term "calibrated" can be found in Karvonen et al. 2020; Cockayne et al. 2021, but the results we present are straightforward to interpret using the informal approach just described). Our goal in this work is to develop an approximately Bayesian PNM for nonlinear PDEs that is both accurate and calibrated.…”
Section: Experimental Assessment
confidence: 78%
“…We use an asterisk "*" to denote that Assumption *2.1 is a misspecified assumption and is not true. After the earliest version of this work was submitted, Assumption *2.1 was also considered by [31]. Under Assumption *2.1, we incorrectly assume…”
Section: Problem Settings and Summary of Results
confidence: 99%
“…Similar settings have also been considered by [31], where they call σ a scale parameter and consider only estimating σ. In practice, μ is usually imposed as a constant [14], estimated by the sample average if there are replicates at each measurement location [3], or estimated via maximum likelihood estimation [62].…”
Section: W. Wang
confidence: 99%
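The treatments of μ mentioned in this excerpt can be made concrete: with a constant mean and kernel sigma^2 * k0, profiling the likelihood yields closed forms for both parameters (generalized least squares for μ, then a plug-in scale estimate). The snippet below is a sketch under those assumptions, not code from the cited works.

```python
import numpy as np

def profile_mle_mean_scale(K0, y):
    """Profile MLE for a GP with constant mean mu and kernel sigma^2 * K0.

    mu_hat     = (1^T K0^{-1} y) / (1^T K0^{-1} 1)   (generalized least squares)
    sigma2_hat = (y - mu_hat)^T K0^{-1} (y - mu_hat) / n
    """
    n = len(y)
    ones = np.ones(n)
    mu_hat = (ones @ np.linalg.solve(K0, y)) / (ones @ np.linalg.solve(K0, ones))
    r = y - mu_hat
    sigma2_hat = (r @ np.linalg.solve(K0, r)) / n
    return mu_hat, sigma2_hat
```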
“…Scaling and other parameters that these processes may have are assumed fixed. Despite recent advances in understanding the behaviour of GP hyperparameters and their effect on the convergence of GP approximation (Karvonen et al., 2020; Teckentrup, 2020; Wynne et al., 2021), these results are either not directly applicable in our setting or too generic, in that they assume the parameter estimates remain in some compact set, which has not been verified for commonly used parameter estimation methods such as maximum likelihood. As mentioned, finite elements are needed for computation of the induced prior u_GP and the associated posterior.…”
Section: Introduction
confidence: 87%
“…Some numerical examples for the one-dimensional Poisson equation are given in Section 5. The proofs of these results are based on reproducing kernel Hilbert space (RKHS) techniques, which are commonly used to analyse approximation properties of GPs (van der Vaart and van Zanten, 2011; Cialenco et al., 2012; Cockayne et al., 2017; Karvonen et al., 2020; Teckentrup, 2020; Wang et al., 2020; Wynne et al., 2021). Our central tool is Theorem 3.7, which describes the RKHS associated to the prior u_GP under the assumptions that the RKHS for f_GP is a Sobolev space and L is a second-order elliptic differential operator.…”
Section: Contributions
confidence: 99%