2011
DOI: 10.5194/gmdd-4-3599-2011
Preprint

The ACCENT-protocol: a framework for benchmarking and model evaluation

Abstract: We summarise results from a workshop on "Model Benchmarking and Quality Assurance" of the EU Network of Excellence ACCENT, including results from other activities (e.g. COST Action 732) and publications. A formalised evaluation protocol is presented, i.e. a generic formalism describing how to perform a model evaluation. It comprises eight steps, illustrated with examples from global model applications. The first and most important step concerns the purpose of the model application, …
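The remaining steps of the protocol are not reproduced in this preview, but the quantitative-comparison part of such an evaluation typically reduces to computing a small set of agreed metrics between model output and an observational reference. The sketch below is a minimal, hypothetical illustration of that idea; the function name, the choice of metrics and the synthetic ozone-like field are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def evaluation_statistics(model, obs):
    """Compare a model field against a gridded observational reference.

    Returns mean bias, root-mean-square error and Pearson correlation,
    three metrics commonly used in model benchmarking exercises.
    Both inputs are arrays of identical shape; non-finite values in
    either field are excluded from all statistics.
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    valid = np.isfinite(model) & np.isfinite(obs)
    m, o = model[valid], obs[valid]
    bias = np.mean(m - o)
    rmse = np.sqrt(np.mean((m - o) ** 2))
    corr = np.corrcoef(m, o)[0, 1]
    return {"bias": bias, "rmse": rmse, "correlation": corr}

# Example with synthetic data: a hypothetical monthly mean total ozone field (DU).
rng = np.random.default_rng(0)
obs_field = rng.normal(300.0, 20.0, size=(36, 72))
model_field = obs_field + rng.normal(5.0, 10.0, size=obs_field.shape)
print(evaluation_statistics(model_field, obs_field))
```

Which metrics are appropriate, and how they are justified, is exactly what the protocol's later steps are meant to make explicit rather than leave implicit.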


Cited by 3 publications (3 citation statements)
References 30 publications
“…Stratospheric trace gas observations are often used to produce monthly zonal mean data sets, or climatologies [e.g., von Clarmann et al.; Grooß and Russell; Hassler et al.; Jones et al.; Randel and Wu]. Monthly zonal mean data products are typically used as prescribed forcing for models [e.g., Cionni et al.] and are also useful for comparison with similarly averaged chemistry climate model output [e.g., Grewe et al.; SPARC CCMVal].…”
Section: Introduction (mentioning)
confidence: 99%
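As a concrete illustration of the monthly zonal mean products mentioned in this statement, the following sketch averages a gridded trace gas field over longitude and calendar month. The array layout, variable names and synthetic values are assumptions for illustration only and are not drawn from any of the cited data sets.

```python
import numpy as np

def monthly_zonal_mean(field, months):
    """Collapse a (time, lat, lon) trace gas field to a (month, lat) climatology.

    field  : array of shape (ntime, nlat, nlon)
    months : array of length ntime with calendar month indices 1..12
    Returns an array of shape (12, nlat); months with no data stay NaN.
    """
    field = np.asarray(field, dtype=float)
    months = np.asarray(months)
    nlat = field.shape[1]
    clim = np.full((12, nlat), np.nan)
    zonal = np.nanmean(field, axis=2)                     # average over longitude
    for m in range(1, 13):
        sel = months == m
        if sel.any():
            clim[m - 1] = np.nanmean(zonal[sel], axis=0)  # average over time
    return clim

# Example with synthetic data: 24 monthly fields on a 45 x 90 grid (e.g. ppmv ozone).
rng = np.random.default_rng(1)
data = rng.normal(5.0, 0.5, size=(24, 45, 90))
month_index = np.tile(np.arange(1, 13), 2)
climatology = monthly_zonal_mean(data, month_index)
print(climatology.shape)  # (12, 45)
```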
“…1.2 paragraph 2. Another challenge comes from the well-known problem of standardized hydrological benchmarking discussed in Abramowitz (2012) and Grewe et al. (2012), as the definition of efficient (from a hydrological simulation point of view) software depends on the proper application of metrics and proper justification at each step.…”
Section: Computational Performance (mentioning)
confidence: 99%
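The citing study does not say which efficiency metrics it has in mind, but a widely used example in hydrological benchmarking is the Nash-Sutcliffe efficiency. The sketch below (function name and the synthetic discharge series are assumptions) shows how such a metric is computed; it is offered as an illustration of "proper application of metrics", not as the metric the cited papers prescribe.

```python
import numpy as np

def nash_sutcliffe_efficiency(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.

    Equals 1 for a perfect simulation, 0 when the simulation is no better
    than the observed mean, and is negative when it is worse.
    """
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Example with a synthetic daily discharge series (m^3/s).
rng = np.random.default_rng(2)
observed_q = 10.0 + 3.0 * np.sin(np.linspace(0, 6 * np.pi, 365)) + rng.normal(0, 0.5, 365)
simulated_q = observed_q + rng.normal(0, 1.0, 365)
print(f"NSE = {nash_sutcliffe_efficiency(simulated_q, observed_q):.2f}")
```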
“…For example, modelling studies often simulate a wide range of optical thickness of contrails, which, for example, cannot be detected from satellite (Marquart et al., 2003; Kärcher et al., 2009). For all these reasons, the comparison of contrail properties such as ice water content and optical thickness for a limited number of simulations can just be seen as a sanity check rather than a hard benchmark test (Grewe et al., 2012b). Figure 10a shows observed and simulated ice water content in contrails as a cumulative probability density function.…”
Section: Contrails (mentioning)
confidence: 99%
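The cumulative distribution comparison mentioned in this last statement is straightforward to set up in principle; the sketch below builds empirical cumulative distribution functions for two samples of contrail ice water content and compares their medians as a simple sanity check. The data here are synthetic placeholders, not the values behind the cited figure.

```python
import numpy as np

def empirical_cdf(samples):
    """Return sorted values and the fraction of samples at or below each value."""
    x = np.sort(np.asarray(samples, dtype=float))
    p = np.arange(1, x.size + 1) / x.size
    return x, p

# Synthetic ice water content samples (mg per m^3), observed vs. simulated.
rng = np.random.default_rng(3)
observed_iwc = rng.lognormal(mean=1.0, sigma=0.6, size=500)
simulated_iwc = rng.lognormal(mean=1.2, sigma=0.5, size=500)

x_obs, p_obs = empirical_cdf(observed_iwc)
x_sim, p_sim = empirical_cdf(simulated_iwc)
# Sanity check in the spirit of the quoted statement: compare the medians.
print(f"median IWC  observed: {np.median(observed_iwc):.1f}  "
      f"simulated: {np.median(simulated_iwc):.1f}")
```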