2009
DOI: 10.1016/j.neucom.2008.09.002
Bagging for Gaussian process regression

Abstract: This paper proposes the application of bagging to obtain more robust and accurate predictions using Gaussian process regression models. The training data is re-sampled using the bootstrap method to form several training sets, from which multiple Gaussian process models are developed and combined through weighting to provide predictions. A number of weighting methods for model combination are discussed, including the simple averaging rule and the weighted averaging rules. We propose to weight the models by the …
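The procedure the abstract describes — bootstrap-resample the training set, fit one Gaussian process per replicate, combine the predictions — can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the kernel, its hyperparameters, and the combination rule (simple averaging, one of several weighting rules the paper discusses) are assumptions made here for brevity.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.2):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean of a standard GP regressor with an RBF kernel.
    The noise jitter also keeps K invertible when the bootstrap
    sample contains duplicated points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    return K_s @ np.linalg.solve(K, y_train)

def bagged_gp_predict(x_train, y_train, x_test, n_models=10, seed=0):
    """Bagging: draw bootstrap replicates of the training set, fit
    one GP per replicate, and combine by simple averaging."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x_train), size=len(x_train))
        preds.append(gp_predict(x_train[idx], y_train[idx], x_test))
    return np.mean(preds, axis=0)
```

Each bootstrap replicate omits roughly a third of the training points and duplicates others, so the individual GPs disagree; averaging them damps the variance of any single fit, which is the robustness gain bagging targets.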

Cited by 136 publications (117 citation statements)
References 23 publications
“…Empirical comparative studies have confirmed the outstanding performance of Gaussian process regression with respect to other non-linear models [11,12,13]. As a result, Gaussian process models have been widely applied to various problems in statistics and engineering [14,11,15,16,17,18,13].…”
Section: Introduction
confidence: 85%
“…Traditionally, each subset of data learns a model different from the others; this is done to increase the expressiveness of the model [11]. The final predictions are then made by combining the predictions of local experts [5].…”
Section: Distributed Inference on Multi-output GP
confidence: 99%
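One common way to "combine the predictions of local experts", as this citing paper describes, is precision-weighted averaging (a product-of-experts rule): each local GP reports a mean and a variance, and experts that are more confident at a test point get more weight there. The sketch below is an assumption-laden illustration of that idea, not the cited paper's method; the kernel, the sorted disjoint partition, and the PoE fusion rule are all choices made here.

```python
import numpy as np

def rbf(a, b, length_scale=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_mean_var(x_tr, y_tr, x_te, noise=1e-2):
    """GP posterior mean and variance for a unit-signal RBF kernel."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    K_s = rbf(x_te, x_tr)
    mean = K_s @ np.linalg.solve(K, y_tr)
    v = np.linalg.solve(K, K_s.T)              # K^{-1} K_s^T
    var = 1.0 - np.sum(K_s * v.T, axis=1) + noise
    return mean, var

def poe_predict(x, y, x_test, n_experts=4):
    """Partition the data into disjoint chunks, train one local GP
    expert per chunk, and fuse experts by precision weighting."""
    order = np.argsort(x)                      # contiguous local regions
    prec = np.zeros(len(x_test))
    weighted = np.zeros(len(x_test))
    for idx in np.array_split(order, n_experts):
        mu, var = gp_mean_var(x[idx], y[idx], x_test)
        prec += 1.0 / var
        weighted += mu / var
    return weighted / prec                     # precision-weighted mean
```

Because each expert only sees its own chunk, its variance grows far from that chunk, so the fusion automatically hands off between experts across the input space.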
“…GP has a computational problem due to unfavorable cubic scaling, O(N^3), during training, where N is the number of training data points. In recent years, many methods have been proposed to address this problem: sparse GP approximations [24][20][12] and localized regression [7][17]. In our work, we describe a clustering regression framework to bring the scaling down.…”
Section: Introduction
confidence: 99%
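The scaling argument behind this citing paper's clustering approach is simple arithmetic: exact GP training is dominated by factorizing an N x N kernel matrix at O(N^3) cost, while k clusters of N/k points each cost k * (N/k)^3 = N^3 / k^2 in total. A quick flop-count proxy (the function names here are illustrative, not from any paper):

```python
def exact_gp_cost(n):
    """Flop-count proxy for exact GP training: one n x n Cholesky, O(n^3)."""
    return n ** 3

def clustered_gp_cost(n, k):
    """k clusters of n // k points each: k * (n/k)^3 = n^3 / k^2."""
    return k * (n // k) ** 3

# Splitting 10,000 points into 10 clusters cuts the cubic cost by k^2 = 100.
ratio = exact_gp_cost(10_000) // clustered_gp_cost(10_000, 10)
```

The quadratic-in-k saving is why even a modest number of clusters makes otherwise intractable training set sizes feasible, at the price of ignoring correlations across cluster boundaries.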