2009
DOI: 10.5194/acp-9-9471-2009
<i>Est modus in rebus</i>: analytical properties of multi-model ensembles

Abstract: In this paper we investigate some basic properties of multi-model ensemble systems, which can be deduced from general characteristics of the statistical distributions of the ensemble members with the help of mathematical tools. In particular, we show how to find the optimal linear combination of model results that minimizes the mean square error, in both the uncorrelated and the correlated model cases. By proving basic estimates we try to deduce general properties describing multi-model ensemble systems…
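The optimal linear combination mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's exact derivation: assuming roughly unbiased models with error covariance matrix C, the weights that minimize the mean square error of a sum-to-one combination are proportional to C⁻¹1 (the function name, synthetic data, and normalization convention here are our assumptions).

```python
import numpy as np

def optimal_weights(errors):
    """Weights minimizing the MSE of a linear combination of models,
    subject to the weights summing to one.

    errors: array of shape (n_samples, n_models) holding
    model-minus-observation residuals."""
    C = np.cov(errors, rowvar=False)   # error covariance across models
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)       # solves C w = 1, i.e. w = C^{-1} 1
    return w / (ones @ w)              # normalize by 1^T C^{-1} 1

# Synthetic demonstration: three "models" with different error levels
rng = np.random.default_rng(0)
truth = rng.normal(size=500)
models = np.stack([truth + rng.normal(scale=s, size=500)
                   for s in (0.5, 1.0, 2.0)], axis=1)

w = optimal_weights(models - truth[:, None])
ens = models @ w                                   # weighted ensemble
mse_ens = np.mean((ens - truth) ** 2)
mse_best = min(np.mean((models[:, j] - truth) ** 2) for j in range(3))
```

With independent errors this reduces to inverse-variance weighting, so the weighted ensemble's MSE is no larger than that of the best single model.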

Cited by 63 publications (79 citation statements). References 47 publications.
“…In this study, we would also like to build upon the research performed in other multi-model ensembles over the years; rather than calculating only the classical model average or median ensemble (mme), we shall also calculate three ensembles based on the findings of Potempski and Galmarini (2009), Riccio et al. (2012), Solazzo et al. (2012a, b, 2013), Galmarini et al. (2013), and Kioutsioukis and Galmarini (2014). We shall therefore refer to the ensemble made by the optimal subset of models that produces the minimum RMSE as mmeS (Solazzo et al., 2012a, b); the ensemble produced by filtering the measurements and all model results using the Kolmogorov-Zurbenko decomposition presented earlier and recombining the four components that best compare with the observed components into a new model set as kzFO; and the optimally weighted combination as mmeW (Potempski and Galmarini, 2009; Kioutsioukis and Galmarini, 2014; Kioutsioukis et al., 2016). Figure 6 shows the effect of the various ensemble treatments for the two groups of models separately, presented as a Taylor diagram.…”
Section: Analysis of the Ensembles and Building the Hybrid One
confidence: 99%
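The mmeS idea quoted above — picking the subset of models whose average minimizes RMSE — can be sketched with a brute-force search. This is an illustrative sketch, not the cited papers' exact algorithm: the function name, exhaustive enumeration, and use of a plain (unweighted) subset mean are our assumptions.

```python
import itertools
import numpy as np

def mme_subset(models, obs):
    """Return the subset of model columns whose plain average
    minimizes RMSE against the observations.

    models: array of shape (n_samples, n_models)
    obs:    array of shape (n_samples,)"""
    n = models.shape[1]
    best_rmse, best_subset = np.inf, None
    # Enumerate every non-empty subset of models
    for k in range(1, n + 1):
        for subset in itertools.combinations(range(n), k):
            ens = models[:, subset].mean(axis=1)      # subset-mean ensemble
            rmse = np.sqrt(np.mean((ens - obs) ** 2))
            if rmse < best_rmse:
                best_rmse, best_subset = rmse, subset
    return best_subset, best_rmse

# Toy example: two models with opposite biases and one poor model;
# the search should keep the complementary pair and drop the outlier.
obs = np.linspace(0.0, 1.0, 100)
models = np.stack([obs + 0.1, obs - 0.1, obs + 5.0], axis=1)
subset, rmse = mme_subset(models, obs)
```

Exhaustive search is exponential in the number of models, which is manageable for the model counts typical of air-quality ensembles but would need a greedy or stepwise variant for larger pools.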
“…Model intercomparison studies have demonstrated that a model ensemble generally provides better performance in comparison to observations (e.g. Vautard et al., 2009) than the individual models, although this requires that the models or model versions are independent (Potempski and Galmarini, 2009).…”
Section: Model Evaluation
confidence: 99%
“…It has been widely demonstrated (e.g. Potempski and Galmarini, 2009) that when multiple model results are distilled to retain only original and independent contributions (Solazzo et al., 2012) and thereafter statistically combined in what is usually called an ensemble, one obtains results that are systematically superior to the performance of the individual models, and can therefore provide more accurate and robust assessments or predictions.…”
Section: Introduction
confidence: 99%
“…(Galmarini et al., 2004; Tebaldi and Knutti, 2007; Potempski and Galmarini, 2009; Solazzo et al., 2012; Solazzo and Galmarini, 2015), which combine results from different models applied to the same case study, it is customary to consider as members those obtained from a homogeneous group of models. In particular, the scale at which the models operate seems to be a discriminant in all such studies performed to date.…”
confidence: 99%