2018
DOI: 10.1016/j.knosys.2017.12.034

Remarks on multi-output Gaussian process regression

Cited by 205 publications (126 citation statements). References 63 publications.
“…The only difference is that, unlike the zero-mean GP prior placed on f(x), we explicitly consider a prior mean µ0 to account for the variability of the noise variance. The kernels k_f and k_g could be, e.g., the squared exponential (SE) function equipped with automatic relevance determination (ARD). The SVSHGP is implemented based on GPflow [51], which benefits from the parallel/GPU speedup and automatic differentiation of TensorFlow [52]. The DVSHGP is built upon the GPML toolbox [53].…”
Section: A Sparse Approximation
confidence: 99%
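The excerpt names an SE kernel with ARD and a constant prior mean µ0 for the noise process. Below is a minimal NumPy sketch of that kernel under this reading; the parameter names (signal_var, lengthscales, mu0) are illustrative and not taken from the paper, GPflow, or GPML.

```python
import numpy as np

def se_ard(X1, X2, signal_var=1.0, lengthscales=None):
    """SE kernel with ARD: k(x, x') = s^2 * exp(-0.5 * sum_d (x_d - x'_d)^2 / l_d^2).
    A separate lengthscale per input dimension lets irrelevant dimensions be
    effectively switched off (the "automatic relevance determination" effect)."""
    X1, X2 = np.atleast_2d(X1), np.atleast_2d(X2)
    if lengthscales is None:
        lengthscales = np.ones(X1.shape[1])
    diff = (X1[:, None, :] - X2[None, :, :]) / lengthscales   # (n1, n2, D)
    return signal_var * np.exp(-0.5 * np.sum(diff ** 2, axis=-1))

# Two such kernels, k_f and k_g, drive the latent function f and the log
# noise variance g; g additionally gets a constant prior mean mu_0
# (illustrative value below), so the noise level varies around exp(mu_0).
mu0 = np.log(0.1)
K = se_ard(np.random.randn(5, 3), np.random.randn(4, 3))      # shape (5, 4)
```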
“…The DVSHGP is built upon the GPML toolbox [53]. For f, we can pre-process the data to fulfill the zero-mean assumption. For g, however, it is hard to satisfy the zero-mean assumption, since we have no access to the "noise" data.…”
Section: A Sparse Approximation
confidence: 99%
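The pre-processing mentioned for f is, on a natural reading, just target centering; a minimal sketch under that assumption, with made-up numbers:

```python
import numpy as np

# Hypothetical training targets for f; subtracting the empirical mean
# makes the zero-mean GP prior assumption reasonable.
y = np.array([2.3, 2.9, 3.4, 2.7])
y_mean = y.mean()
y_centered = y - y_mean      # fit the GP on the centered targets

# At prediction time, add the mean back: f_pred = posterior_mean + y_mean.
# No analogous pre-processing exists for g, since the latent "noise"
# values are never observed; hence the explicit prior mean mu_0.
```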
“…2) Multi-target Regression: The aim of multi-target regression is to simultaneously predict multiple real-valued output variables for one instance [5], [57]. Here, multiple labels are associated with each instance, represented by a real-valued vector, where the values represent how strongly the instance corresponds to a label.…”
Section: B Problem Definition of Multi-output Learning
confidence: 99%
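As a concrete instance of that definition, here is a short scikit-learn sketch of multi-target regression; the Ridge base learner and the random data are illustrative, and a per-target baseline like this deliberately ignores the output correlations that multi-output GPs try to exploit.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # n = 100 instances, D = 5 features
Y = rng.normal(size=(100, 3))   # T = 3 real-valued targets per instance

# Independent-targets baseline: one regressor fitted per output variable.
model = MultiOutputRegressor(Ridge()).fit(X, Y)
Y_hat = model.predict(X)        # shape (100, 3)
```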
“…Kernels such as the Squared Exponential, Periodic, and Matérn, as well as kernel design methods, have been proposed [2]. The extension of GPs to multiple sources of data is known as multi-task Gaussian processes (MTGPs) [3]. MTGPs model temporal or spatial relationships among infinitely many random variables, just as scalar GPs do, but also account for the statistical dependence across different sources of data (or tasks) [3,4,5,6,7,8,9].…”
Section: Introduction
confidence: 99%
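The three kernels named in this excerpt are available in common GP libraries; a minimal sketch using scikit-learn's kernel classes (RBF is its name for the Squared Exponential), with a simple composed kernel as an example of kernel design:

```python
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, Matern

k_se  = RBF(length_scale=1.0)                              # Squared Exponential
k_per = ExpSineSquared(length_scale=1.0, periodicity=1.0)  # Periodic
k_mat = Matern(length_scale=1.0, nu=2.5)                   # Matérn

# Kernel design: sums and products of valid kernels are valid kernels,
# e.g. a locally periodic component plus a Matérn term.
k_designed = k_se * k_per + k_mat
```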
“…The extension of GPs to multiple sources of data is known as multi-task Gaussian processes (MTGPs) [3]. MTGPs model temporal or spatial relationships among infinitely many random variables, just as scalar GPs do, but also account for the statistical dependence across different sources of data (or tasks) [3,4,5,6,7,8,9]. How to choose an appropriate kernel that jointly models the cross-covariance between tasks and the auto-covariance within each task is the core aspect of MTGP design [3,10,11,12,5,13,14].…”
Section: Introduction
confidence: 99%
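One standard way to couple cross-covariance between tasks with auto-covariance within each task is the intrinsic coregionalization model (ICM). The sketch below is illustrative only: the cited works survey several such constructions, and the names B, W, and kappa are ours, not from the paper.

```python
import numpy as np

def icm_kernel(X1, X2, tasks1, tasks2, B, base_kernel):
    """ICM kernel for an MTGP: K((x, i), (x', j)) = B[i, j] * k(x, x').
    B models the cross-covariance between tasks; k is the auto-covariance
    over inputs, shared by all tasks."""
    K_input = base_kernel(X1, X2)        # (n1, n2) input covariance
    K_task = B[np.ix_(tasks1, tasks2)]   # (n1, n2) task covariance
    return K_task * K_input

# Low-rank-plus-diagonal task matrix B = W W^T + diag(kappa) keeps B PSD.
W = np.array([[1.0], [0.8]])             # 2 tasks, rank-1 coupling
kappa = np.array([0.1, 0.2])
B = W @ W.T + np.diag(kappa)

# SE base kernel over inputs, plus a tiny usage example.
se = lambda A, C: np.exp(-0.5 * ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1))
X = np.random.randn(3, 1)
tasks = np.array([0, 1, 0])              # task index of each input point
K = icm_kernel(X, X, tasks, tasks, B, se)   # (3, 3) joint covariance
```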