During the 2005 NOAA Hazardous Weather Testbed Spring Experiment, two different high-resolution configurations of the Weather Research and Forecasting-Advanced Research WRF (WRF-ARW) model were used to produce 30-h forecasts 5 days a week for a total of 7 weeks. These configurations used the same physical parameterizations and the same input dataset for the initial and boundary conditions, differing primarily in their spatial resolution. The first set of runs used 4-km horizontal grid spacing with 35 vertical levels, while the second used 2-km grid spacing and 51 vertical levels. Output from these daily forecasts is analyzed to assess the numerical forecast sensitivity to spatial resolution in the upper end of the convection-allowing range of grid spacing. The focus is on the central United States and the time period 18-30 h after model initialization. The analysis is based on a combination of visual comparison, systematic subjective verification conducted during the Spring Experiment, and objective metrics based largely on the mean diurnal cycle of the simulated reflectivity and precipitation fields. Additional insight is gained by examining the size distributions of the individual reflectivity and precipitation entities, and by comparing forecasts of mesocyclone occurrence in the two sets of forecasts. In general, the 2-km forecasts provide more detailed presentations of convective activity, but there appears to be little, if any, forecast skill on the scales where the added details emerge. On the scales where both model configurations show higher levels of skill (the scale of mesoscale convective features), the numerical forecasts appear to provide comparable utility as guidance for severe weather forecasters.
These results suggest that, for the geographical, phenomenological, and temporal parameters of this study, any added value provided by decreasing the grid increment from 4 to 2 km (with commensurate adjustments to the vertical resolution) may not be worth the considerable increases in computational expense.
During the 2007 NOAA Hazardous Weather Testbed Spring Experiment, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma produced a daily 10-member 4-km horizontal resolution ensemble forecast covering approximately three-fourths of the continental United States. Each member used the Advanced Research version of the Weather Research and Forecasting (WRF-ARW) model core, which was initialized at 2100 UTC, ran for 33 h, and resolved convection explicitly. Different initial condition (IC), lateral boundary condition (LBC), and physics perturbations were introduced in 4 of the 10 ensemble members, while the remaining 6 members used identical ICs and LBCs, differing only in terms of microphysics (MP) and planetary boundary layer (PBL) parameterizations. This study focuses on precipitation forecasts from the ensemble. The ensemble forecasts reveal WRF-ARW sensitivity to MP and PBL schemes. For example, over the 7-week experiment, the Mellor-Yamada-Janjić PBL and Ferrier MP parameterizations were associated with relatively high precipitation totals, while members configured with the Thompson MP or Yonsei University PBL scheme produced comparatively less precipitation. Additionally, different approaches for generating probabilistic ensemble guidance are explored. Specifically, a "neighborhood" approach is described and shown to considerably enhance the skill of probabilistic forecasts for precipitation when combined with a traditional technique of producing ensemble probability fields.
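The neighborhood idea described above can be illustrated with a minimal sketch: instead of computing the exceedance probability at each grid point from the ensemble members alone, the count is pooled over all points within a chosen radius. The function below is a generic, simplified version of that idea (the function name, square-window shape, and parameters are illustrative assumptions, not the paper's exact formulation).

```python
import numpy as np

def neighborhood_probability(ensemble_fields, threshold, radius):
    """Neighborhood ensemble probability of exceeding `threshold`.

    For each grid point, the probability is the fraction of
    (member, grid point) pairs exceeding the threshold within a
    square window of +/- `radius` grid points. A simplified sketch
    of the neighborhood approach; details may differ from the study.
    """
    exceed = (np.asarray(ensemble_fields) > threshold).astype(float)
    n_members, ny, nx = exceed.shape
    prob = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - radius), min(ny, j + radius + 1)
            i0, i1 = max(0, i - radius), min(nx, i + radius + 1)
            # Pool over members and the spatial neighborhood together.
            prob[j, i] = exceed[:, j0:j1, i0:i1].mean()
    return prob
```

Because each point borrows information from its neighbors, the resulting probability field is smoother and less sensitive to small displacement errors in individual members, which is why such fields tend to verify better than point-wise ensemble probabilities.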
Small-scale (less than ~15 km) precipitation variability significantly affects the hydrologic response of a basin and the accurate estimation of water and energy fluxes through coupled land-atmosphere modeling schemes. It also affects the radiative transfer through precipitating clouds and thus rainfall estimation from microwave sensors. Because both land-atmosphere and cloud-radiation interactions are nonlinear and occur over a broad range of scales (from a few centimeters to several kilometers), it is important that, over these scales, cloud-resolving numerical models realistically reproduce the observed precipitation variability. This issue is examined herein by using a suite of multiscale statistical methods to compare the scale dependence of precipitation variability of a numerically simulated convective storm with that observed by radar. In particular, Fourier spectrum, structure function, and moment-scale analyses are used to show that, although the variability of modeled precipitation agrees with that observed for scales larger than approximately 5 times the model resolution, the model shows a falloff in variability at smaller scales. Thus, depending upon the smallest scale at which variability is considered to be important for a specific application, one has to resort either to very high resolution model runs (resolutions 5 times higher than the scale of interest) or to stochastic methods that can introduce the missing small-scale variability. The latter involve upscaling the model output to a scale approximately 5 times the model resolution and then stochastically downscaling it to smaller scales. The results of multiscale analyses, such as those presented herein, are key to the implementation of such stochastic downscaling methodologies.
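The Fourier-spectrum diagnostic mentioned above compares how variance is distributed across spatial scales in modeled versus observed fields; a falloff in the model spectrum at high wavenumbers is the signature of the missing small-scale variability. The sketch below computes a radially averaged power spectrum of a 2-D field with standard NumPy FFT routines; it is a generic illustration, not the authors' analysis code.

```python
import numpy as np

def radial_power_spectrum(field):
    """Radially averaged Fourier power spectrum of a 2-D field.

    Returns mean spectral power as a function of integer radial
    wavenumber. Comparing such spectra from model output and radar
    observations reveals at which scales modeled variability falls off.
    Generic sketch; windowing/detrending choices are application specific.
    """
    field = np.asarray(field, dtype=float)
    ny, nx = field.shape
    # Remove the mean so the zero-wavenumber (DC) power is ~0.
    f = np.fft.fftshift(np.fft.fft2(field - field.mean()))
    power = np.abs(f) ** 2
    # Integer radial wavenumber for each 2-D spectral coefficient.
    ky, kx = np.indices((ny, nx))
    k = np.hypot(ky - ny // 2, kx - nx // 2).astype(int)
    totals = np.bincount(k.ravel(), weights=power.ravel())
    counts = np.bincount(k.ravel())
    return totals / np.maximum(counts, 1)
```

Plotting the resulting spectra on log-log axes for model and radar fields makes the scale of divergence (here, roughly 5 times the model grid spacing) directly visible.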
Convection-allowing configurations of the Weather Research and Forecasting (WRF) model were evaluated during the 2004 Storm Prediction Center–National Severe Storms Laboratory Spring Program in a simulated severe weather forecasting environment. The utility of the WRF forecasts was assessed in two different ways. First, WRF output was used in the preparation of daily experimental human forecasts for severe weather. These forecasts were compared with corresponding predictions made without access to WRF data to provide a measure of the impact of the experimental data on the human decision-making process. Second, WRF output was compared directly with output from current operational forecast models. Results indicate that human forecasts showed a small, but measurable, improvement when forecasters had access to the high-resolution WRF output and, in the mean, the WRF output received higher ratings than the operational Eta Model on subjective performance measures related to convective initiation, evolution, and mode. The results suggest that convection-allowing models have the potential to provide a value-added benefit to the traditional guidance package used by severe weather forecasters.
During the 2007 NOAA Hazardous Weather Testbed (HWT) Spring Experiment, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma produced convection-allowing forecasts from a single deterministic 2-km model and a 10-member 4-km-resolution ensemble. In this study, the 2-km deterministic output was compared with forecasts from the 4-km ensemble control member. Other than the difference in horizontal resolution, the two sets of forecasts featured identical Advanced Research Weather Research and Forecasting model (ARW-WRF) configurations, including vertical resolution, forecast domain, initial and lateral boundary conditions, and physical parameterizations. Therefore, forecast disparities were attributed solely to differences in horizontal grid spacing. This study is a follow-up to similar work that was based on results from the 2005 Spring Experiment. Unlike the 2005 experiment, however, model configurations were more rigorously controlled in the present study, providing a more robust dataset and a cleaner isolation of the dependence on horizontal resolution. Additionally, in this study, the 2- and 4-km outputs were compared with 12-km forecasts from the North American Mesoscale (NAM) model. Model forecasts were analyzed using objective verification of mean hourly precipitation and visual comparison of individual events, primarily during the 21- to 33-h forecast period to examine the utility of the models as next-day guidance. On average, both the 2- and 4-km model forecasts showed substantial improvement over the 12-km NAM. However, although the 2-km forecasts produced more-detailed structures on the smallest resolvable scales, the patterns of convective initiation, evolution, and organization were remarkably similar to the 4-km output. Moreover, on average, metrics such as equitable threat score, frequency bias, and fractions skill score revealed no statistical improvement of the 2-km forecasts compared to the 4-km forecasts. 
These results, based on the 2007 dataset, corroborate previous findings, suggesting that decreasing horizontal grid spacing from 4 to 2 km provides little added value as next-day guidance for severe convective storm and heavy rain forecasters in the United States.
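The objective metrics named in the study above (equitable threat score and frequency bias) are standard contingency-table verification scores for thresholded precipitation fields. The sketch below uses their textbook definitions; it is an illustration of the kind of metric cited, not the study's own verification code.

```python
import numpy as np

def contingency_scores(forecast, observed, threshold):
    """Equitable threat score (ETS) and frequency bias for gridded
    precipitation, thresholded into yes/no events.

    Textbook definitions: ETS discounts hits expected by random
    chance; frequency bias is forecast event count over observed
    event count. Sketch only; real verification adds masking,
    neighborhoods, etc.
    """
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    n = f.size
    # Hits expected from a random forecast with the same event frequencies.
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    bias = (hits + false_alarms) / (hits + misses)
    return float(ets), float(bias)
```

A perfect forecast gives ETS = 1 and bias = 1; ETS near zero means no skill beyond chance, which is the sense in which the 2-km runs showed "no statistical improvement" over 4 km at these thresholds.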
Test beds have become an integral part of the weather enterprise, bridging research and forecast services by transitioning innovative tools and tested methods that impact forecasts and forecast users. Over roughly the last decade, a variety of "test beds" have come into existence focused on high-impact weather and the core tools of meteorology: observations, models, and fundamental understanding of the underlying physical processes. They have entered the proverbial "valley of death" between research and forecast operations (NAS 2000) and have survived. [Figure: the test bed cycle — inputs (develop and introduce new ideas, data, etc.), a test-and-refine loop (experiment and demonstrate, assess impacts and evaluate, revise and iterate), and outputs (end testing).] This paper provides a brief background on how this happened; summarizes test bed origins, methods, and selected accomplishments; and provides a perspective on the future of test beds in our field. Dabbert et al. (2005) provides a useful description of test beds from early in their development and Fig.
The impacts of assimilating radar data and other mesoscale observations in real-time, convection-allowing model forecasts were evaluated during the spring seasons of 2008 and 2009 as part of the Hazardous Weather Testbed Spring Experiment activities. In tests of a prototype continental U.S.-scale forecast system, focusing primarily on regions with active deep convection at the initial time, assimilation of these observations had a positive impact. Daily interrogation of output by teams of modelers, forecasters, and verification experts provided additional insights into the value-added characteristics of the unique assimilation forecasts. This evaluation revealed that the positive effects of the assimilation were greatest during the first 3-6 h of each forecast, appeared to be most pronounced with larger convective systems, and may have been related to a phase lag that sometimes developed when the convective-scale information was not assimilated. These preliminary results are currently being evaluated further using advanced objective verification techniques.