2023
DOI: 10.1016/j.amc.2023.128032
A singular Woodbury and pseudo-determinant matrix identities and application to Gaussian process regression

Cited by 2 publications (2 citation statements)
References 33 publications
“…A common approach to alleviate the ill‐conditioning of a matrix is to regularize it, that is, to add a positive nugget to its diagonal. For a GP, the addition of a nugget to the covariance matrix is analogous to having noisy data, 6,34 as can be seen between Equations (6) and (16) for the gradient‐free and gradient‐enhanced cases, respectively. When the nugget is zero, the mean of the posterior for the GP will match the function of interest exactly at all points where it has been evaluated.…”
Section: Previous Methods To Mitigate Ill-conditioning
confidence: 99%
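The nugget regularization described in the excerpt above can be illustrated with a short sketch. This is a hypothetical example, not code from the cited paper: an RBF kernel matrix built from closely spaced points is nearly singular, and adding a small positive nugget to its diagonal sharply reduces the condition number. The function name `rbf_kernel` and all parameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, length_scale=1.0):
    # Pairwise squared distances between 1-D sample points
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

X = np.linspace(0.0, 1.0, 20)        # closely spaced points -> ill-conditioned K
K = rbf_kernel(X)

nugget = 1e-6
K_reg = K + nugget * np.eye(len(X))  # add nugget to the diagonal

# The nugget raises the smallest eigenvalue of the (positive semi-definite)
# kernel matrix, so the condition number drops by many orders of magnitude.
print(np.linalg.cond(K))
print(np.linalg.cond(K_reg))
```

As the excerpt notes, with a zero nugget the GP posterior mean interpolates the data exactly; the nugget trades that exactness for numerical stability, exactly as assuming noisy observations would.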
“…The gradient-free covariance matrix is commonly given by $\Sigma(\mathsf{X}; \hat{\sigma}_{\mathsf{K}}, \boldsymbol{\gamma}, \hat{\sigma}_0) = \hat{\sigma}_{\mathsf{K}}^2\, \mathsf{K}(\mathsf{X}; \boldsymbol{\gamma}) + \hat{\sigma}_0^2\, \mathsf{I}$, where the hyperparameter $\hat{\sigma}_{\mathsf{K}}^2$ is the variance of the stationary residual error and $\hat{\sigma}_0^2$ is a hyperparameter that estimates $\sigma_0^2$, the true noise variance.34 The hyperparameter $\hat{\sigma}_0^2$ is used when the function evaluations are noisy and, in practice, it also serves to regularize $\Sigma$ in order to reduce its condition number.34 To separate the need to regularize the covariance matrix from the estimation of the uncertainty of the function evaluations, we use the following notation $\Sigma(\mathsf{X}; \hat{\sigma}_{\mathsf{K}}, \boldsymbol{\gamma}, \eta_{\mathsf{K}}, \hat{\sigma}_f) = \ldots$…”
Section: Gaussian Process
confidence: 99%
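The covariance definition in the excerpt above can be sketched directly. This is a minimal illustration, assuming a 1-D input set and a stationary RBF correlation for $\mathsf{K}(\mathsf{X}; \boldsymbol{\gamma})$ with $\boldsymbol{\gamma}$ as its length scale; the function name `covariance` and all numeric values are assumptions for demonstration only.

```python
import numpy as np

def covariance(X, sigma_K, gamma, sigma_0):
    # Sigma = sigma_K^2 * K(X; gamma) + sigma_0^2 * I, as in the excerpt:
    # sigma_K^2 scales the stationary correlation, sigma_0^2 models noise
    # and simultaneously regularizes Sigma.
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d2 / gamma**2)   # stationary RBF correlation K(X; gamma)
    return sigma_K**2 * K + sigma_0**2 * np.eye(len(X))

X = np.linspace(0.0, 1.0, 10)
Sigma = covariance(X, sigma_K=2.0, gamma=0.3, sigma_0=0.1)

# Each diagonal entry equals sigma_K^2 + sigma_0^2, since K has unit diagonal.
print(Sigma[0, 0])
```

Because $\hat{\sigma}_0^2$ plays the double role of noise estimate and regularizer here, the excerpt's final (truncated) equation introduces a separate regularization parameter $\eta_{\mathsf{K}}$ to disentangle the two effects.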