When constructing a Bayesian Machine Learning model, we might be faced with multiple different prior distributions and are thus required to incorporate them into our model in a sensible manner. While this situation is reasonably well explored in classical Bayesian Statistics, it appears useful to develop a corresponding method for complex Machine Learning problems. Given their underlying Bayesian framework and their widespread popularity, Gaussian Processes are a good candidate to tackle this task. We therefore extend the idea of mixture models for Gaussian Process regression in order to work with multiple prior beliefs at once; both an analytical regression formula and a Sparse Variational approach are considered. In addition, we consider using our approach to account for the problem of prior misspecification in functional regression problems.

Keywords Gaussian Processes • Prior Pooling • Mixture Models

Such challenges are particularly concerning when a GP prior distribution places zero probability on the target function, a problem known as prior misspecification. In this case, the posterior distribution cannot converge to the true underlying function either. While, for example, [2] show that a misspecified model still converges to a reasonable posterior in the KL sense, there is obviously no guarantee that the resulting posterior will be of any practical use.
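To make the pooling idea concrete for the analytical case, the following is a minimal NumPy sketch, not the method as implemented here: each candidate prior (illustrated by RBF kernels with hypothetical lengthscales) yields an exact GP posterior, and the posteriors are pooled as a mixture with weights proportional to each prior's marginal likelihood. Only the mixture mean is computed; a full treatment would also pool the predictive covariances.

```python
import numpy as np

def rbf(x1, x2, lengthscale, variance=1.0):
    """Squared-exponential (RBF) kernel on 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xs, lengthscale, noise=0.1):
    """Exact GP regression: posterior mean at Xs and log marginal likelihood."""
    K = rbf(X, X, lengthscale) + noise ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = rbf(Xs, X, lengthscale) @ alpha
    # log p(y) = -0.5 y^T K^{-1} y - 0.5 log|K| - (n/2) log 2*pi
    log_ml = (-0.5 * y @ alpha
              - np.log(np.diag(L)).sum()
              - 0.5 * len(X) * np.log(2.0 * np.pi))
    return mean, log_ml

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
Xs = np.linspace(0.0, 5.0, 100)

# Three candidate priors: RBF kernels with illustrative lengthscales.
lengthscales = [0.2, 1.0, 5.0]
results = [gp_posterior(X, y, Xs, ls) for ls in lengthscales]
means = [m for m, _ in results]
log_mls = np.array([lm for _, lm in results])

# Pool the posteriors: mixture weights proportional to each prior's
# marginal likelihood (uniform prior over the candidate GP priors).
w = np.exp(log_mls - log_mls.max())
w /= w.sum()
pooled_mean = sum(wi * mi for wi, mi in zip(w, means))
```

Under this pooling scheme, a badly misspecified prior receives a vanishing weight as its marginal likelihood collapses, which is one way the mixture construction can mitigate the misspecification problem described above.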