Representation of clouds in convection-permitting models is sensitive to numerical weather prediction (NWP) model parameters that are often only crudely known (for example, roughness length). Our goal is to allow for uncertainty in these parameters and to estimate them from data using the ensemble Kalman filter (EnKF) approach. However, to deal with difficulties associated with convective-scale applications, such as non-Gaussianity and constraints on state and parameter values, modifications to the classical EnKF are necessary. In this article, we evaluate and extend several recently developed EnKF-based algorithms that either incorporate constraints such as mass conservation and positivity of precipitation explicitly or introduce higher-order moments into the joint state and parameter estimation problem. We compare their results with those of the localized EnKF for a common idealized test case. The test case uses perfect-model experiments with the one-dimensional modified shallow-water model, which was designed to mimic important properties of convection. We use a stochastic dynamical model for the parameters in order to prevent underdispersion in parameter space. To deal with localization in parameter estimation, we introduce a method called global updating, a computationally cheap modification of spatial updating that proved successful in this context. The sensitivity of the results to the number of ensemble members and to localization, as well as to observation coverage and frequency, is shown. Although all algorithms are capable of reducing the initial state and parameter errors, we conclude that mass conservation is important when the localization radius is small and/or the observations are sparse. In addition, accounting for higher-order moments in the joint state and parameter estimation problem is beneficial when the ensemble size is large enough or when it is applied to parameter estimation only.
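Joint state and parameter estimation of the kind described above is commonly implemented by augmenting the state vector with the parameters and updating both with the same Kalman gain. The following is a minimal sketch of such a stochastic EnKF update with an additive random walk on a single parameter to prevent underdispersion; all dimensions, noise levels, and function names are illustrative and not the article's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update_augmented(ens, obs, obs_idx, obs_err_std, param_noise_std):
    """Stochastic EnKF update of an augmented [state; parameter] ensemble.

    ens:     (n_aug, n_ens) ensemble matrix; the last row holds the parameter.
    obs_idx: indices of observed components of the augmented vector.
    A random walk is applied to the parameter row beforehand, a simple
    stochastic dynamical model that keeps parameter spread from collapsing.
    """
    n_aug, n_ens = ens.shape
    # stochastic dynamical model for the parameter: additive random walk
    ens[-1, :] += param_noise_std * rng.standard_normal(n_ens)

    # ensemble mean and anomalies
    mean = ens.mean(axis=1, keepdims=True)
    A = ens - mean

    # sample covariances between the augmented vector and observed components
    H_A = A[obs_idx, :]                      # observed anomalies
    P_HT = A @ H_A.T / (n_ens - 1)           # cross-covariance (n_aug, n_obs)
    H_P_HT = H_A @ H_A.T / (n_ens - 1)       # observation-space covariance
    R = obs_err_std**2 * np.eye(len(obs_idx))
    K = P_HT @ np.linalg.inv(H_P_HT + R)     # Kalman gain

    # perturbed observations (stochastic EnKF)
    obs_pert = obs[:, None] + obs_err_std * rng.standard_normal(
        (len(obs_idx), n_ens))
    return ens + K @ (obs_pert - ens[obs_idx, :])
```

Because the parameter row is updated by the same gain, every observation influences it regardless of distance; restricting or modifying that coupling is where localization choices such as the article's global updating enter.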
State-of-the-art ensemble prediction systems usually provide ensembles with only 20–250 members for estimating the uncertainty of the forecast and its spatial and spatiotemporal covariances. Given that atmospheric models have several orders of magnitude more degrees of freedom, the estimates are substantially affected by sampling errors. For error covariances, spurious correlations lead not only to random sampling errors but also to a systematic overestimation of the correlation. A common approach to mitigating the impact of sampling errors in data assimilation is to localize correlations. However, this is a challenging task, given that physical correlations in the atmosphere can extend over long distances. Besides data assimilation, sampling errors pose an issue for the investigation of spatiotemporal correlations using ensemble sensitivity analysis. Our study evaluates a statistical approach for correcting sampling errors. The applied sampling error correction is a lookup-table-based approach and therefore computationally very efficient. We show that this approach substantially improves both the estimates of spatial correlations for data assimilation and those of spatiotemporal correlations for ensemble sensitivity analysis. The evaluation is performed using the first convective-scale 1000-member ensemble simulation for central Europe. Correlations of the 1000-member ensemble forecast serve as the truth for assessing the performance of the sampling error correction on smaller subsets of the full ensemble. The sampling error correction strongly reduces both random and systematic errors for all evaluated variables, ensemble sizes, and lead times.
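A lookup-table sampling error correction of this kind can be sketched as follows: an offline Monte-Carlo step tabulates, per bin of sampled correlation and for a given ensemble size, the average true correlation that produced samples in that bin; the online step is then a single table lookup per correlation, which is why the method is so cheap. This is a simplified stand-in for the published table-based method, with all grid sizes and names chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def build_correction_table(n_ens, n_bins=40, samples_per_r=300):
    """Offline Monte-Carlo step: for true correlations on a grid, draw
    n_ens-member bivariate-normal samples, bin the sample correlations,
    and record the mean generating true correlation per bin."""
    bins = np.linspace(-1.0, 1.0, n_bins + 1)
    sums = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    for r_true in np.linspace(-0.98, 0.98, 99):
        L = np.linalg.cholesky(np.array([[1.0, r_true], [r_true, 1.0]]))
        for _ in range(samples_per_r):
            xy = L @ rng.standard_normal((2, n_ens))
            r_samp = np.corrcoef(xy)[0, 1]
            k = np.clip(np.searchsorted(bins, r_samp) - 1, 0, n_bins - 1)
            sums[k] += r_true
            counts[k] += 1
    table = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    return bins, table

def correct_correlation(r_sample, bins, table):
    """Online step: one table lookup per correlation (very cheap)."""
    k = np.clip(np.searchsorted(bins, r_sample) - 1, 0, len(table) - 1)
    return table[k]
```

For small ensembles the table shrinks large sample correlations toward zero, counteracting the systematic overestimation described above, while leaving the sign intact.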
For numerical discretization schemes, the violation of enstrophy conservation causes a systematic and unrealistic energy cascade towards high wavenumbers. The same occurs in data assimilation schemes, where the total energy, enstrophy, and divergence can be strongly affected. In this article, we construct an ensemble data assimilation algorithm that conserves mass, total energy, and enstrophy. The algorithm uses B-spline functions for localization and sequential quadratic programming to solve the nonlinear constrained minimization problem. Idealized experiments are performed using a 2D shallow-water model, with the selected constraints derived from the nature run. All experiments exhibit comparable root-mean-square errors, with a slight advantage for those that include the conservation constraint on the globally integrated enstrophy. However, the kinetic energy and enstrophy spectra in experiments with the enstrophy constraint are considerably closer to the true spectra, in particular at the smallest resolvable scales. Imposing conservation of enstrophy within the data assimilation algorithm therefore effectively avoids the spurious energy cascade of the rotational part and thereby suppresses the noise generated by the data assimilation algorithm. The 14-day deterministic free forecast, started from the initial condition enforced by both the total energy and enstrophy constraints, produces the best prediction; the same holds for the ensemble free forecasts.
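For linear (or linearized) equality constraints such as conservation of globally integrated mass, the constrained analysis reduces to an equality-constrained quadratic problem that can be solved exactly through its KKT system. The sketch below illustrates that core step for a variational-style cost; sequential quadratic programming, as used in the article, is needed once nonlinear constraints such as energy and enstrophy enter, which is not shown here. All matrix names and sizes are illustrative:

```python
import numpy as np

def constrained_analysis(xb, B, y, H, R, C, d):
    """Analysis with linear equality constraints C x = d (e.g. conservation
    of globally integrated mass, with C a row of grid-cell weights and d
    the conserved total from the nature run).

    Solves the KKT system of
        min 0.5*(x-xb)' B^-1 (x-xb) + 0.5*(y-Hx)' R^-1 (y-Hx)
        s.t. C x = d
    directly, which is exact for this quadratic, linearly constrained case.
    """
    Binv = np.linalg.inv(B)
    Rinv = np.linalg.inv(R)
    A = Binv + H.T @ Rinv @ H                    # Hessian of the cost
    b = Binv @ xb + H.T @ Rinv @ y               # linear term
    m = C.shape[0]
    kkt = np.block([[A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([b, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[: xb.size]                        # drop the Lagrange multipliers
```

The analysis still draws toward the observations, but the Lagrange multipliers redistribute the increment so that the conserved integral is preserved exactly.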
In previous work, it was shown that preserving physical properties in the data assimilation framework can significantly reduce forecast errors. Data assimilation methods that can impose such constraints on the calculation of the analysis, such as the quadratic programming ensemble (QPEns), are computationally more expensive, severely limiting their application to high-dimensional prediction systems as found in the Earth sciences. We therefore propose using a convolutional neural network (CNN), trained on the difference between the analyses produced by a standard ensemble Kalman filter (EnKF) and by the QPEns, to correct violations of the imposed constraints. In this paper, we focus on the conservation of mass and show that, in an idealised set-up, the hybrid of a CNN and the EnKF is capable of reducing analysis and background errors to the same level as the QPEns.
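Schematically, the hybrid adds a CNN-predicted correction to the EnKF analysis. The toy numpy forward pass below uses random, untrained placeholder weights and appends an exact uniform mass fix so the output conserves a prescribed total; it only illustrates the interface of such a hybrid, not the trained network of the paper:

```python
import numpy as np

def conv1d(x, w, b):
    """'Same'-padded single-channel 1D convolution (schematic, odd kernel)."""
    k = len(w)
    xp = np.pad(x, k // 2)
    return np.array([xp[i:i + k] @ w for i in range(len(x))]) + b

def cnn_correction(x, weights):
    """Tiny two-layer 1D CNN emitting an additive correction field.
    The weights here are untrained placeholders; in the paper's setting
    they would be trained on (EnKF analysis, QPEns - EnKF) pairs."""
    w1, b1, w2, b2 = weights
    h = np.maximum(conv1d(x, w1, b1), 0.0)   # conv + ReLU
    return conv1d(h, w2, b2)                  # linear output layer

def hybrid_analysis(x_enkf, weights, total_mass):
    """EnKF analysis + learned correction, followed by an exact uniform
    mass fix so the hybrid conserves the prescribed total mass."""
    x = x_enkf + cnn_correction(x_enkf, weights)
    return x + (total_mass - x.sum()) / x.size
```

The point of the hybrid is that the expensive constrained optimization is paid only offline, when generating QPEns training targets; at analysis time the cost is a single cheap forward pass.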
Convective-scale data assimilation uses high-resolution numerical weather prediction models and temporally and spatially dense observations of relevant atmospheric variables. In addition, it requires a data assimilation algorithm that can provide initial conditions for a state vector of large size, with one third or more of its components containing prognostic hydrometeor variables whose non-negativity needs to be preserved. The algorithm also needs to be fast, as the state vector requires a high updating frequency in order to capture fast-changing convection. A computationally efficient algorithm for quadratic optimization (QO, formerly QP) is presented here, which preserves physical properties in order to represent features of the real atmosphere. Crucially for its performance, it exploits the fact that the resulting linear constraints may be disjoint. Numerical results on a simple model designed for testing convective-scale data assimilation show accurate results and promising computational cost. In particular, if the constraints on physical quantities are disjoint and their rank is small, a further reduction in computational cost can be achieved.
Keywords: convective-scale predictions, data assimilation, disjoint linear constraints, quadratic optimization, preservation of non-negativity
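When the linear constraints are disjoint, that is, they act on non-overlapping blocks of the state vector, the large constrained QO problem decomposes into independent small problems that can be solved separately (and in parallel). The sketch below illustrates this under the assumption of a block-diagonal Hessian with non-negativity constraints, using a non-negative least-squares reformulation per block via scipy's `nnls`; the block structure and names are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

def blockwise_nonneg_qo(x0, A, blocks):
    """Minimize (x - x0)' A (x - x0) subject to x >= 0, where A is
    block-diagonal over disjoint index blocks: the 'disjoint constraints'
    situation. Each block is an independent small QP, solved here by
    rewriting it as a non-negative least-squares problem.
    """
    x = np.empty_like(x0)
    for idx in blocks:
        Ab = A[np.ix_(idx, idx)]
        L = np.linalg.cholesky(Ab)            # A_b = L L'
        # (x-x0)' A_b (x-x0) = ||L'(x - x0)||^2  ->  nnls on L' x = L' x0
        x[idx], _ = nnls(L.T, L.T @ x0[idx])
    return x
```

Solving many small QPs instead of one large one is where the promised cost reduction comes from: the work scales with the largest block, not with the full state dimension.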