Wind observations collected at citizen weather stations (CWSs) could be an invaluable resource in climate and meteorology studies, yet these observations are underutilised because scientists do not have confidence in their quality. These wind speed observations have systematic biases, likely caused by improper instrumentation and station siting. Such systematic biases introduce spatial inconsistencies that prevent comparisons between stations and limit the usability of the data. In this paper, we address these issues by improving existing methods and developing new ones for identifying suspect observations and adjusting systematic biases. Our complete quality control and bias adjustment procedure consists of four steps: (a) performing within-station quality control tests to check the plausible range and the temporal consistency of observations, (b) adjusting the systematic bias using empirical quantile mapping, (c) implementing between-station quality control to compare observations from neighbouring stations and identify spatially inconsistent observations, and (d) providing estimates of the true wind when CWSs falsely report zero wind speeds, as a complement to the bias adjustment. We apply these methods to CWSs from the Weather Observation Website (WOW) in the Netherlands, comparing the crowdsourced data with official data and statistically assessing the improvements in data quality after each step. The results demonstrate that the crowdsourced wind speed data are more comparable with official data after the quality control and bias adjustment steps. Our quality assessment methods therefore give confidence in CWSs, converting their observations into a usable data product and an invaluable resource for applications in need of additional wind observations.
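Step (b) above names empirical quantile mapping, a standard distribution-matching technique. The following is a minimal illustrative sketch of the generic method, not the paper's exact implementation: CWS wind speeds are mapped onto the official distribution by matching empirical quantiles estimated from an overlapping training period. All function and variable names here are assumptions for illustration.

```python
import numpy as np

def empirical_quantile_mapping(cws_train, ref_train, values):
    """Adjust CWS wind speeds by mapping them onto the reference distribution.

    cws_train, ref_train: overlapping training samples from the CWS and an
    official (reference) station; values: new CWS observations to adjust.
    """
    quantiles = np.linspace(0.01, 0.99, 99)
    cws_q = np.quantile(cws_train, quantiles)
    ref_q = np.quantile(ref_train, quantiles)
    # Locate each value within the empirical CWS distribution, then map it
    # to the same quantile of the reference distribution.
    return np.interp(values, cws_q, ref_q)

# Toy example: synthetic Weibull-distributed wind speeds, with the CWS
# systematically reading about 30 % too low.
rng = np.random.default_rng(0)
ref = rng.weibull(2.0, 5000) * 6.0         # "official" wind speeds (m/s)
cws = 0.7 * rng.weibull(2.0, 5000) * 6.0   # biased CWS wind speeds (m/s)
adjusted = empirical_quantile_mapping(cws, ref, cws)
print(round(cws.mean(), 2), round(adjusted.mean(), 2), round(ref.mean(), 2))
```

After the mapping, the mean of the adjusted CWS series should lie close to the reference mean, illustrating how the systematic low bias is removed while the rank order of the observations is preserved.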
Abstract. Statistical postprocessing of medium-range weather forecasts is an important component of modern forecasting systems. Since the beginning of modern data science, numerous new postprocessing methods have been proposed, complementing an already very diverse field. However, one of the questions that frequently arises when considering different methods in the framework of implementing operational postprocessing is the relative performance of the methods for a given specific task. It is particularly challenging to find or construct a common comprehensive dataset that can be used to perform such comparisons. Here, we introduce the first version of EUPPBench (EUMETNET postprocessing benchmark), a dataset of time-aligned forecasts and observations, with the aim to facilitate and standardize this process. This dataset is publicly available at https://github.com/EUPP-benchmark/climetlab-eumetnet-postprocessing-benchmark (last access: 31 December 2022) and on Zenodo (https://doi.org/10.5281/zenodo.7429236, Demaeyer, 2022b and https://doi.org/10.5281/zenodo.7708362, Bhend et al., 2023). We provide examples showing how to download and use the data, we propose a set of evaluation methods, and we perform a first benchmark of several methods for the correction of 2 m temperature forecasts.
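Benchmarking postprocessed ensemble forecasts, as described above, typically relies on proper scoring rules; the continuous ranked probability score (CRPS) is the most common choice for 2 m temperature. The sketch below shows the standard sample-based CRPS estimator, CRPS = E|X − y| − ½ E|X − X′|; it is a generic illustration and not the benchmark's own evaluation code, and all names are assumptions.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS estimate for a single ensemble forecast.

    members: 1-D array of ensemble members; obs: scalar verifying
    observation. Implements CRPS = E|X - y| - 0.5 * E|X - X'|.
    Lower is better; 0 means a perfect deterministic forecast.
    """
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Toy check: a sharp, well-centred ensemble scores lower (better) than a
# warm-biased one for the same observed 2 m temperature.
rng = np.random.default_rng(1)
obs = 15.0                          # observed 2 m temperature (deg C)
good = rng.normal(15.0, 1.0, 50)    # centred ensemble
biased = rng.normal(18.0, 1.0, 50)  # +3 deg C biased ensemble
print(crps_ensemble(good, obs) < crps_ensemble(biased, obs))  # True
```

In a benchmark setting, such a score would be averaged over all stations and lead times before comparing postprocessing methods; skill is then often reported relative to the raw (uncorrected) ensemble.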