The relative importance of the sensible heat supply from the ocean and of latent heating in maintaining near-surface mean baroclinicity in the major storm-track regions is assessed by analyzing steady linear responses of a planetary wave model to individual components of zonally asymmetric thermal forcing taken from a global reanalysis dataset. The model experiments, carried out separately for the North Atlantic, North Pacific, and south Indian Oceans, indicate that the distinct local maxima of near-surface baroclinicity observed along the storm tracks can be reinforced most efficiently as a response to near-surface sensible heating. This result suggests that the differential sensible heat supply from the ocean across an oceanic frontal zone is particularly important for the efficient restoration of surface baroclinicity, acting against the relaxing effect of poleward eddy heat transport and thereby setting up conditions favorable for the recurrent development of transient eddies that anchor a storm track. Contrary to what has been suggested, the corresponding reinforcement of near-surface baroclinicity along a storm track as the response to latent heating, due either to cumulus convection or to large-scale condensation, is found to be less efficient. As is well known, poleward eddy heat flux convergence is the primary contributor to the reinforcement of the surface westerlies, especially in the core of a storm track. In the exit region, a substantial contribution also arises from the planetary wave response to the sensible heat supply from the ocean. In contrast, the surface wind acceleration in the planetary wave response to latent heating contributes negatively to the maintenance of the surface westerlies along all of the major storm tracks.
Despite dramatic improvements over recent decades, operational NWP forecasts still occasionally suffer abrupt drops in forecast skill. Such forecast skill “dropouts” may occur even in a perfect NWP system because of the stochastic nature of NWP, but they can also result from flaws in the NWP system. Recent studies have shown that some dropouts stem not from model deficiencies but from misspecified initial conditions, suggesting that they could be mitigated by improving the quality control (QC) system so that the observation-minus-background (O−B) innovations that would degrade a forecast can be detected and rejected. The ensemble forecast sensitivity to observations (EFSO) technique enables quantification of how much each observation has improved or degraded the forecast. A recent study showed that 24-h EFSO can detect detrimental O−B innovations that caused regional forecast skill dropouts and that the forecast can be improved by not assimilating them. Inspired by that success, a new QC method is proposed, termed proactive QC (PQC), which detects detrimental innovations 6 h after the analysis using EFSO and then repeats the analysis and forecast without them. PQC is implemented and tested on a lower-resolution version of NCEP's operational global NWP system. It is shown that EFSO is insensitive to the choice of verification and lead time (24 or 6 h) and that PQC likely improves the analysis, as attested by forecast improvements extending to 5 days and beyond. Strategies for reducing the computational cost and further optimizing the observation rejection criteria are also discussed.
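The PQC cycle just described (analyze, estimate per-observation impacts, reject detrimental innovations, re-analyze) can be illustrated with a deliberately minimal scalar toy. Everything below is an assumption-laden sketch, not the NCEP implementation: the per-observation impact is computed by brute-force leave-one-out re-analysis against a known truth, a stand-in for the ensemble EFSO formula, which estimates the same quantity from a single analysis (verified against a later analysis rather than truth) without any reruns.

```python
import numpy as np

def analyze(xb, sig_b, obs, sig_o):
    """Scalar least-squares analysis of background xb with a list of obs."""
    weights = [1.0 / sig_b**2] + [1.0 / sig_o**2] * len(obs)
    values = [xb] + list(obs)
    return np.average(values, weights=weights)

def pqc_cycle(xb, sig_b, obs, sig_o, truth):
    # 1. Analysis and "forecast" (persistence) using all observations
    xa = analyze(xb, sig_b, obs, sig_o)
    base_err = (xa - truth) ** 2
    # 2. Impact of each observation on forecast error (leave-one-out proxy
    #    for EFSO; positive impact = the observation was detrimental)
    impacts = []
    for j in range(len(obs)):
        rest = obs[:j] + obs[j + 1:]
        err_without = (analyze(xb, sig_b, rest, sig_o) - truth) ** 2
        impacts.append(base_err - err_without)
    # 3. Repeat the analysis without the detrimental observations
    kept = [ob for ob, imp in zip(obs, impacts) if imp <= 0.0]
    return analyze(xb, sig_b, kept, sig_o), impacts

truth = 1.0
obs = [1.02, 0.97, 1.01, 3.0]   # the last observation is corrupted
xa_pqc, impacts = pqc_cycle(xb=0.9, sig_b=0.2, obs=obs, sig_o=0.05, truth=truth)
print(impacts)   # only the corrupted observation has a positive impact
print(xa_pqc)    # the re-analysis lands close to the truth
```

The rejection threshold (here zero) plays the role of the observation rejection criteria whose optimization the abstract discusses.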
To successfully assimilate data from a new observing system, it is necessary to develop appropriate data selection strategies that assimilate only the generally useful data. This development work is usually done by trial and error using observing system experiments (OSEs), which are very time- and resource-consuming. This study proposes a new, efficient methodology to accelerate the development using ensemble forecast sensitivity to observations (EFSO). First, non-cycled assimilation of the new observation data is conducted to compute EFSO diagnostics for each observation within a large sample. Second, the average EFSO impact, conditionally sampled in terms of various factors, is computed. Third, potential data selection criteria are designed based on the non-cycled EFSO statistics and tested in cycled OSEs to verify the actual assimilation impact. The usefulness of this method is demonstrated with the assimilation of satellite precipitation data. It is shown that the EFSO-based method can efficiently suggest data selection criteria that significantly improve the assimilation results.
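The conditional-sampling step of this methodology can be sketched with synthetic data. The stratification factor, impact model, and bin edges below are all invented for illustration; in practice the impacts would come from the non-cycled EFSO computation and the factors from observation metadata such as precipitation intensity.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs = 10_000

# Hypothetical stratification factor per observation (e.g. rain intensity)
intensity = rng.uniform(0.0, 50.0, n_obs)
# Synthetic EFSO impacts: beneficial (negative) at low-to-moderate
# intensity, detrimental (positive) at high intensity, plus noise
impact = 0.01 * (intensity - 30.0) + rng.normal(0.0, 0.2, n_obs)

# Conditionally sampled mean EFSO impact in intensity bins of width 10
bins = np.arange(0.0, 60.0, 10.0)
idx = np.digitize(intensity, bins) - 1
mean_impact = np.array([impact[idx == b].mean() for b in range(len(bins) - 1)])

# Candidate selection criterion: assimilate only beneficial-on-average bins,
# to be verified afterwards in cycled OSEs
keep_bins = np.flatnonzero(mean_impact < 0.0)
print(mean_impact)
print(bins[keep_bins])   # lower edges of the intensity ranges to keep
```

The point of the method is that this table of conditional means is cheap to produce from one non-cycled run, so only the few promising criteria need expensive cycled OSE verification.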
The ensemble Kalman filter (EnKF) is empirically known to suffer from insufficient posterior spread, and the problem is aggravated when a large volume of observations is assimilated. This problem, commonly called analysis underdispersion or analysis overconfidence, is well recognized, but why it occurs remains poorly understood. Inspired by the theory of degrees of freedom for signal, this article investigates the problem by analyzing the trace of the matrix HK, where H and K denote, respectively, the observation operator and the gain matrix. A simple mathematical argument shows that tr HK for an EnKF is bounded from above by the ensemble size, which entails that assimilating many more observations than the ensemble size automatically leads to underestimation of tr HK, as long as the observations are of accuracy comparable to the background. Since tr HK can be interpreted as the squared spread of the posterior ensemble measured in the normalized observation space, an underestimated tr HK implies overconfidence in the analysis spread, which, in a cycled context, requires covariance inflation to be applied. The theory is then extended to cases where covariance localization (either B-localization or R-localization) is applied, showing how localization alleviates the analysis underdispersion. These findings are demonstrated with a simple one-dimensional covariance model. Finally, they are used to form speculative arguments about several puzzling features of the local ensemble transform Kalman filter (LETKF) reported in the literature: why using fewer observations can lead to better performance, when optimal localization scales tend to occur, and why covariance inflation methods based on the relaxation-to-prior-information approach are particularly successful when observations are distributed inhomogeneously.
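The trace bound is easy to verify numerically. The following minimal sketch assumes R = I and a random dense observation operator (choices made here for illustration only): with far more observations than ensemble members, tr HK stays strictly below the ensemble size because it is a sum of terms λᵢ/(1 + λᵢ) < 1 over the at most m − 1 nonzero eigenvalues of HP♭Hᵀ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 200, 20, 1000    # state size, ensemble size, observation count

# Background ensemble perturbations (mean removed) and sample covariance,
# whose rank is at most m - 1
X = rng.standard_normal((n, m))
X -= X.mean(axis=1, keepdims=True)
Pb = X @ X.T / (m - 1)

# Dense observation operator and unit observation error covariance R = I
H = rng.standard_normal((p, n))
S = H @ Pb @ H.T + np.eye(p)       # innovation covariance H Pb H^T + R
K = np.linalg.solve(S, H @ Pb).T   # Kalman gain Pb H^T S^{-1}

# Squared posterior spread in normalized observation space: despite the
# 1000 assimilated observations, it cannot exceed m - 1 = 19
tr_HK = np.trace(H @ K)
print(tr_HK)
```

With observations this accurate relative to the background, tr HK sits just below 19, illustrating why a 20-member filter assimilating 1000 observations is forced into overconfidence.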
In 2011, the National Oceanic and Atmospheric Administration (NOAA) began a cooperative initiative with the academic community to help address a vexing, long-standing issue: the disconnect between the operational and research realms in weather forecasting and data assimilation. The issue is the gap, more exotically referred to as the “valley of death,” between efforts within the broader research community and NOAA's activities, which are heavily driven by operational constraints. With the stated goals of leveraging research community efforts to benefit NOAA's mission and of offering a path to operations for the latest research that supports the NOAA mission, satellite data assimilation in particular, this initiative aims to strengthen the linkage between NOAA's operational systems and research efforts. A critical component is the establishment of an efficient operations-to-research (O2R) environment on the Supercomputer for Satellite Simulations and Data Assimilation Studies (S4). This O2R environment is critical for successful research-to-operations (R2O) transitions because it allows rigorous tracking, implementation, and merging of any changes (to operational software codes, scripts, libraries, etc.) necessary to achieve a scientific enhancement. So far, the S4 O2R environment, with close to 4,700 computing cores (60 TFLOPS) and 1,700 TB of disk storage, has been a great success and was consequently expanded recently to significantly increase its computing capacity. The objective of this article is to highlight some of the major achievements and benefits of this O2R approach, along with lessons learned, with the ultimate goal of inspiring other O2R/R2O initiatives in other areas and for other applications.
Data assimilation (DA) methods require an estimate of the observation error covariance R as an external parameter that is typically tuned in a subjective manner. To facilitate objective and systematic tuning of R within the context of ensemble Kalman filtering, this paper introduces a method for estimating how forecast errors would be changed by increasing or decreasing each element of R, without need for the adjoint of the model or of the DA system, by combining the adjoint-based R-sensitivity diagnostics presented previously by Daescu with the technique employed by Kalnay et al. to derive ensemble forecast sensitivity to observations (EFSO). The proposed method, termed EFSR, is shown to be able to detect and adaptively correct a misspecified R through a series of toy-model experiments with the Lorenz '96 model. It is then applied to a quasi-operational global DA system of the National Centers for Environmental Prediction to provide guidance on how to tune R. A sensitivity experiment in which the prescribed observation error variances for four selected observation types were scaled by 0.9 or 1.1 following the EFSR guidance, however, resulted in forecast improvement that is not statistically significant, which can be explained by the smallness of the perturbation given to R. An iterative online approach is proposed to overcome this limitation. Nevertheless, the sensitivity experiment did show that the EFSO impacts of each observation type were increased by the EFSR-guided tuning of R.
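The quantity EFSR estimates, the change in forecast error from rescaling an assumed observation error variance, can be shown by brute force in a scalar toy. All settings below are invented for illustration: the true observation error s.d. is 1.0 but the DA assumes 0.5, so inflating the assumed variance (as EFSR guidance would suggest) reduces the mean forecast error. EFSR obtains this sensitivity from ensemble statistics without the Monte Carlo rerun used here.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_sq_fcst_err(sig_o_assumed, n_trials=200_000):
    """Mean squared error of a scalar analysis (persistence forecast)
    when the true obs error s.d. is 1.0 but sig_o_assumed is prescribed."""
    truth = 0.0
    xb = truth + rng.normal(0.0, 1.0, n_trials)   # background error s.d. 1
    y = truth + rng.normal(0.0, 1.0, n_trials)    # true obs error s.d. 1
    # Analysis weight implied by the assumed (misspecified) R
    w = (1.0 / sig_o_assumed**2) / (1.0 + 1.0 / sig_o_assumed**2)
    xa = (1 - w) * xb + w * y
    return np.mean((xa - truth) ** 2)

e_small = mean_sq_fcst_err(0.5)    # overconfident prescribed R
e_tuned = mean_sq_fcst_err(0.55)   # variance scaled up by ~1.2
print(e_small, e_tuned)            # error decreases after the tuning
```

The analytic error here is (s⁴ + 1)/(1 + s²)² for assumed s.d. s, i.e. 0.68 at s = 0.5 versus 0.5 at the correct s = 1, which also shows why a small EFSR-guided nudge produces only a small (possibly insignificant) improvement.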
We propose a unifying theory for covariance inflation (CI) in the ensemble Kalman filter (EnKF) that encompasses all existing CI methods and can explain many open problems in CI. Each CI method is identified with an inflation function that alters analysis perturbations through their singular values. Inflation functions are usually considered to be functions of the singular values of the background or analysis perturbations. We show, however, that it is more fruitful to view inflation functions as functions of the factors by which assimilation reduces the background singular values. These factors indeed comprise the spectra of the linear transformations between background and analysis perturbations. To be an inflation function, a function must satisfy three conditions: (a) the functional condition: all reduction factors must increase; (b) the no-observation condition: when no observations are assimilated, the analysis perturbations are identical to the background perturbations; and (c) the order-preserving condition: the inflated analysis singular values must have the same order as the background singular values. If the upper-bound condition, namely that the inflated analysis error variances must be less than the observation error variances, is imposed, the resulting inflation functions are shown to be equivalent to prior inflation functions, which are functions of the singular values of the background perturbations. This condition is necessary if we want to inflate analysis increments in posterior CI. It turns out that the relaxation-to-prior-spread and relaxation-to-prior-perturbation methods belong to the class of linear inflation functions, which also contains constant, multiplicative, and parameter-varying linear inflation functions. More interestingly, the deterministic EnKF is found to belong to the class of quadratic inflation functions, a class that admits an elegant form for computing analysis perturbations through the Kalman gain. Higher-order polynomial and non-polynomial inflation functions are less appealing in practice because of their high computational cost and the difficulty of determining their free parameters.
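A concrete instance of the linear class can be sketched numerically. The toy sizes, the synthetic reduction factors, and the relaxation parameter below are all assumptions for illustration: the linear function f(r) = α + (1 − α)r is applied to the reduction factors of the background singular values, and the three defining conditions are then checked directly.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 10      # toy state and ensemble sizes
alpha = 0.5        # relaxation parameter (assumed value)

# Background perturbations (mean removed) and their SVD
Xb = rng.standard_normal((n, m))
Xb -= Xb.mean(axis=1, keepdims=True)
Ub, sb, Vbt = np.linalg.svd(Xb, full_matrices=False)

# Synthetic reduction factors r = s_a / s_b of the background singular
# values after assimilation, ordered so that the analysis preserves the
# background ordering
r = np.sort(rng.uniform(0.2, 0.9, size=sb.size))[::-1]
sa = r * sb                                 # analysis singular values

# Linear inflation function of the reduction factors, the class that
# contains the relaxation-to-prior methods
r_infl = alpha + (1 - alpha) * r
sa_infl = r_infl * sb

# Inflated analysis perturbations reconstructed through the SVD
Xa_infl = (Ub * sa_infl) @ Vbt
print(np.c_[r, r_infl])   # every reduction factor increases
```

Note that f(1) = 1, so when nothing is assimilated (all reduction factors equal to one) the perturbations are left untouched, which is exactly the no-observation condition.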
A new grid system on the sphere is proposed that allows straightforward implementation of both spherical-harmonics-based spectral methods and gridpoint-based multigrid methods. The latitudinal gridpoints in the new grid are equidistant, and spectral transforms in the latitudinal direction are performed using Clenshaw-Curtis quadrature. The spectral transforms with this new grid and quadrature are shown to be exact to within machine precision provided that the grid truncation is such that there are at least 2N + 1 latitudinal gridpoints for a total truncation wavenumber of N. The new grid and quadrature are implemented and tested on a shallow-water-equations model and on the hydrostatic dry dynamical core of the global NWP model JMA-GSM. The integration results obtained with the new quadrature are almost identical to those obtained with conventional Gaussian quadrature on a Gaussian grid. Only minor code changes are required to adapt any Gaussian-grid-based spectral model to the proposed quadrature. Keywords: Clenshaw-Curtis quadrature; Fejér quadrature; global spectral model; Legendre transform; multigrid. Q. J. R. Meteorol. Soc. 2018;144:1382-1397.
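The exactness claim can be checked directly: with 2N + 1 Clenshaw-Curtis points, the products of Legendre polynomials up to degree N (the integrands of the Legendre-transform orthogonality relations) are polynomials of degree at most 2N and are therefore integrated to machine precision. The weight routine below follows the standard Clenshaw-Curtis construction for an even number of intervals; it is an illustrative sketch, not the JMA-GSM code.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def clenshaw_curtis(n):
    """Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on
    [-1, 1]; n is assumed even here."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    v = np.ones(n - 1)
    for k in range(1, n // 2):
        v -= 2.0 * np.cos(2 * k * theta[1:-1]) / (4 * k * k - 1)
    v -= np.cos(n * theta[1:-1]) / (n * n - 1)
    w = np.empty(n + 1)
    w[1:-1] = 2.0 * v / n
    w[0] = w[-1] = 1.0 / (n * n - 1)
    return x, w

N = 10                            # total truncation wavenumber
x, w = clenshaw_curtis(2 * N)     # 2N + 1 gridpoints, as required

# Quadrature of P_a(x) P_b(x): the exact value is 2/(2a+1) if a == b, else 0
err = 0.0
for a in range(N + 1):
    Pa = legval(x, [0.0] * a + [1.0])
    for b in range(N + 1):
        Pb = legval(x, [0.0] * b + [1.0])
        q = np.sum(w * Pa * Pb)
        exact = 2.0 / (2 * a + 1) if a == b else 0.0
        err = max(err, abs(q - exact))
print(err)   # at machine-precision level
```

The nodes x = cos θ on equidistant θ are exactly the equidistant-latitude points of the proposed grid, which is what makes the rule compatible with gridpoint-based multigrid methods.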