2022
DOI: 10.1080/10618600.2022.2129662
Vecchia-Approximated Deep Gaussian Processes for Computer Experiments

Cited by 11 publications (2 citation statements). References 61 publications.
“…Other techniques for large‐scale GP, such as sparse approximation (Quiñonero‐Candela & Rasmussen, 2005; Sang & Huang, 2012), covariance tapering (Furrer et al, 2006), inducing inputs (Snelson & Ghahramani, 2006), and nearest neighbor GPs (Datta et al, 2016; Finley et al, 2019) (which has been utilized in Cheng et al (2021) for large‐scale calibration), as well as Vecchia‐approximated GPs/deep GPs (Katzfuss & Guinness, 2021; Sauer, Cooper, & Gramacy, 2023), are also worth exploring within the framework of KOH calibration. For a comprehensive review of large‐scale GP, refer to Liu et al (2020).…”
Section: Applications In Diverse Scenarios (citation type: mentioning)
confidence: 99%
“…If the size of the neighbourhood is n = O(N^a), then the cost of each prediction is O(N^(3a)), which can be tedious and inefficient when a large number of sequential predictions are required. We briefly note that a similar class of methods, based on Vecchia approximations (Vecchia, 1988), have recently become popular (Katzfuss et al, 2020; Katzfuss & Guinness, 2021; Sauer et al, 2022). These methods generally aim to construct a sparse approximation to the Cholesky factor of the covariance matrix in O(nm^3) time.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
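The second citation statement summarises the core Vecchia idea: fix an ordering of the observations and let each one condition only on a small set of m nearest previously ordered neighbours, so the joint Gaussian density factorises into n cheap univariate conditionals, costing O(nm^3) overall rather than the exact GP's O(n^3). A minimal sketch of that factorisation, assuming a user-supplied kernel and a brute-force neighbour search (all names here are illustrative, not taken from the paper or any library):

```python
import numpy as np

JITTER = 1e-10  # small diagonal nugget for numerical stability


def vecchia_loglik(X, y, kernel, m=5):
    """Vecchia-approximated GP log-likelihood.

    Under a fixed ordering, observation i conditions only on its m
    nearest predecessors, so the joint Gaussian density factorises
    into n univariate conditionals, each built from an m x m solve:
    O(n m^3) total work instead of the exact GP's O(n^3).
    """
    n = X.shape[0]
    ll = 0.0
    for i in range(n):
        if i == 0:
            # first point has no predecessors: use the prior marginal
            mu, var = 0.0, kernel(X[0], X[0]) + JITTER
        else:
            # brute-force nearest-neighbour search among predecessors
            d = np.linalg.norm(X[:i] - X[i], axis=1)
            nb = np.argsort(d)[:m]
            K_nn = np.array([[kernel(X[a], X[b]) for b in nb] for a in nb])
            K_nn += JITTER * np.eye(len(nb))
            k_in = np.array([kernel(X[i], X[b]) for b in nb])
            sol = np.linalg.solve(K_nn, k_in)
            mu = sol @ y[nb]                                 # conditional mean
            var = kernel(X[i], X[i]) + JITTER - sol @ k_in   # conditional variance
        ll += -0.5 * (np.log(2.0 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll
```

With m >= n - 1 every point conditions on all of its predecessors and the factorisation recovers the exact GP log-likelihood; in practice m is kept to a small constant (on the order of 10 to 30), which is what yields the O(nm^3) scaling quoted above.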