2014
DOI: 10.5194/gmd-7-1247-2014
Root mean square error (RMSE) or mean absolute error (MAE)? – Arguments against avoiding RMSE in the literature

Abstract: Both the root mean square error (RMSE) and the mean absolute error (MAE) are regularly employed in model evaluation studies. Willmott and Matsuura (2005) have suggested that the RMSE is not a good indicator of average model performance and might be a misleading indicator of average error, and thus that the MAE would be a better metric for that purpose. While some concerns over using RMSE raised by Willmott and Matsuura (2005) and Willmott et al. (2009) are valid, the proposed avoidance of RMSE in favor of…
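The two metrics debated in the abstract can be stated concretely. A minimal sketch (the function names and sample data are illustrative, not taken from the paper):

```python
import math

def rmse(pred, obs):
    # Root mean square error: square root of the mean squared difference.
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def mae(pred, obs):
    # Mean absolute error: mean of the absolute differences.
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

pred = [2.0, 4.0, 6.0, 8.0]
obs = [1.0, 4.5, 5.0, 12.0]

# RMSE >= MAE always holds; the single large error (8 vs 12)
# widens the gap, since squaring weights large errors more heavily.
print(rmse(pred, obs))  # ~2.136
print(mae(pred, obs))   # 1.625
```

The gap between the two values is itself informative: identical absolute errors give RMSE = MAE, while a heavy-tailed error distribution pushes RMSE well above MAE.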

Cited by 3,853 publications (1,914 citation statements). References 10 publications (13 reference statements).
“…Although WEW increases the wind and the SWH underestimation, it overall improves the SWH RMSE by approximately 7 % against buoy data and by 11 % against remotely sensed data. In contrast to the bias scores, the RMSE penalizes the variance between the in situ or remotely sensed data and the simulations, implying a deterioration of the RMSE in the CTRL run (Chai and Draxler, 2014). Similar RMSE improvements by coupled systems have also been confirmed in the relevant literature (e.g., Lionello et al., 2003; Renault et al., 2012).…”
Section: System Configuration (supporting)
confidence: 64%
“…R² is a number between 0 and 1 that indicates the part of the variance in the data that is explained by the model, so that an R² value of 1 indicates that the model fully describes the variance in the data, while an R² value of 0 indicates that the model fails to explain anything of the data presented. The MAE and RMSE are other well-established metrics that can be used to quantify the quality of a model (Willmott, 1982; Chai and Draxler, 2014), and that directly measure the discrepancy between model predictions and experimental data.…”
Section: Introduction (mentioning)
confidence: 99%
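The R² definition quoted above can be written out directly: R² = 1 − SS_res / SS_tot, the fraction of the observed variance captured by the model. A hedged sketch (function name and data are illustrative):

```python
def r_squared(pred, obs):
    # R^2 = 1 - SS_res / SS_tot: fraction of variance explained.
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))   # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)          # total sum of squares
    return 1.0 - ss_res / ss_tot

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.1, 1.9, 3.2, 3.8, 5.1]  # close fit, so R^2 is near 1

print(r_squared(pred, obs))  # 0.989
print(r_squared(obs, obs))   # 1.0 for a perfect model
```

Unlike MAE and RMSE, R² is dimensionless and scale-free, which is why the excerpt treats the three metrics as complementary rather than interchangeable.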
“…(32) are also used to assess the errors in the estimates of the derivatives. Since the SPH errors are expected to have a normal rather than a uniform distribution, the RMSE will provide a better representation of the error distribution than other statistical metrics [3]. Compared to the mean absolute error (MAE), which is more closely related to the L¹-norm, the RMSE gives a higher weighting to large errors in the sample, and is therefore superior at revealing model performance differences.…”
Section: Numerical Analysis (mentioning)
confidence: 99%
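The claim above about normally distributed errors has a well-known quantitative form: for zero-mean Gaussian errors with standard deviation σ, the RMSE converges to σ while the MAE converges to σ·√(2/π) ≈ 0.798σ. A numerical check (a sketch under that assumption, not code from the cited works):

```python
import math
import random

random.seed(0)
sigma = 2.0
# Draw a large sample of zero-mean Gaussian errors.
errors = [random.gauss(0.0, sigma) for _ in range(100_000)]

rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
mae = sum(abs(e) for e in errors) / len(errors)

# For zero-mean normal errors, RMSE estimates sigma, and
# MAE/RMSE approaches sqrt(2/pi) ~= 0.798.
print(rmse, mae, mae / rmse)
```

This fixed ratio is one of the arguments in the paper for reporting both metrics: when the ratio deviates strongly from 0.8, the error distribution is likely non-Gaussian.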