Led by NOAA’s Storm Prediction Center and National Severe Storms Laboratory, annual spring forecasting experiments (SFEs) in the Hazardous Weather Testbed test and evaluate cutting-edge technologies and concepts for improving severe weather prediction through intensive real-time forecasting and evaluation activities. Experimental forecast guidance is provided through collaborations with several U.S. government and academic institutions, as well as the Met Office. The purpose of this article is to summarize activities, insights, and preliminary findings from recent SFEs, emphasizing SFE 2015. Several innovative aspects of recent experiments are discussed, including the 1) use of convection-allowing model (CAM) ensembles with advanced ensemble data assimilation, 2) generation of severe weather outlooks valid at time periods shorter than those issued operationally (e.g., 1–4 h), 3) use of CAMs to issue outlooks beyond the day 1 period, 4) increased interaction through software allowing participants to create individual severe weather outlooks, and 5) tests of newly developed storm-attribute-based diagnostics for predicting tornadoes and hail size. Additionally, plans for future experiments will be discussed, including the creation of a Community Leveraged Unified Ensemble (CLUE) system, which will test various strategies for CAM ensemble design using carefully designed sets of ensemble members contributed by different agencies to drive evidence-based decision-making for near-future operational systems.
Hourly maximum fields of simulated storm diagnostics from experimental versions of convection-permitting models (CPMs) provide valuable information regarding severe weather potential. While past studies have focused on predicting any type of severe weather, this study uses a CPM-based Weather Research and Forecasting (WRF) Model ensemble initialized daily at the National Severe Storms Laboratory (NSSL) to derive tornado probabilities using a combination of simulated storm diagnostics and environmental parameters. Daily probabilistic tornado forecasts are developed from the NSSL-WRF ensemble using updraft helicity (UH) as a tornado proxy. The UH fields are combined with simulated environmental fields such as lifted condensation level (LCL) height, most unstable and surface-based CAPE (MUCAPE and SBCAPE, respectively), and multifield severe weather parameters such as the significant tornado parameter (STP). Varying thresholds of 2–5-km updraft helicity were tested with differing values of σ in the Gaussian smoother that was used to derive forecast probabilities, as well as different environmental information, with the aim of maximizing both forecast skill and reliability. The addition of environmental information improved the reliability and the critical success index (CSI) while slightly degrading the area under the receiver operating characteristic (ROC) curve across all UH thresholds and σ values. The probabilities accurately reflected the location of tornado reports, and three case studies demonstrate value to forecasters. Based on initial tests, four sets of tornado probabilities were chosen for evaluation by participants in the 2015 National Oceanic and Atmospheric Administration’s Hazardous Weather Testbed Spring Forecasting Experiment from 4 May to 5 June 2015. Participants found the probabilities useful and noted an overforecasting tendency.
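The UH-threshold-plus-Gaussian-smoother procedure described above can be sketched as follows. This is a minimal illustration, not the study's tuned configuration: the UH threshold, 4-km grid spacing, 40-km neighborhood, and σ value below are all assumed for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def surrogate_severe_probs(uh, uh_thresh=75.0, dx_km=4.0,
                           neighborhood_km=40.0, sigma_km=120.0):
    """Convert an hourly-max 2-5-km UH field (m^2 s^-2) into smoothed
    tornado-proxy probabilities on the model grid."""
    exceed = (uh >= uh_thresh).astype(float)        # binary UH exceedance
    half_width = int(round(neighborhood_km / dx_km))
    # Spread each exceedance over a ~40-km neighborhood, then smooth
    # with a Gaussian kernel (sigma converted to grid points).
    events = maximum_filter(exceed, size=2 * half_width + 1)
    probs = gaussian_filter(events, sigma=sigma_km / dx_km)
    return np.clip(probs, 0.0, 1.0)

# Toy example: a single grid point exceeding the UH threshold.
uh = np.zeros((100, 100))
uh[50, 50] = 150.0
p = surrogate_severe_probs(uh)
```

In the study, environmental fields such as STP or LCL height would additionally be used to filter or scale the exceedance grid before smoothing; varying `uh_thresh` and `sigma_km` reproduces the sensitivity tests described in the abstract.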
The 2016–18 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiments (SFE) featured the Community Leveraged Unified Ensemble (CLUE), a coordinated convection-allowing model (CAM) ensemble framework designed to provide empirical guidance for development of operational CAM systems. The 2017 CLUE included 81 members that all used 3-km horizontal grid spacing over the CONUS, enabling direct comparison of forecasts generated using different dynamical cores, physics schemes, and initialization procedures. This study uses forecasts from several of the 2017 CLUE members and one operational model to evaluate and compare CAM representation and next-day prediction of thunderstorms. The analysis utilizes existing techniques and novel, object-based techniques that distill important information about modeled and observed storms from many cases. The National Severe Storms Laboratory Multi-Radar Multi-Sensor product suite is used to verify model forecasts and climatologies of observed variables. Unobserved model fields are also examined to further illuminate important intermodel differences in storms and near-storm environments. No single model performed better than the others in all respects. However, there were many systematic intermodel and intercore differences in specific forecast metrics and model fields. Some of these differences can be confidently attributed to particular differences in model design. Model intercomparison studies similar to the one presented here are important to better understand the impacts of model and ensemble configurations on storm forecasts and to help optimize future operational CAM systems.
Attempts at probabilistic tornado forecasting using convection-allowing models (CAMs) have thus far used CAM attribute [e.g., hourly maximum 2–5-km updraft helicity (UH)] thresholds, treating them as binary events—either a grid point exceeds a given threshold or it does not. This study approaches these attributes probabilistically, using empirical observations of storm environment attributes and the subsequent climatological tornado occurrence frequency to assign a probability that a point will be within 40 km of a tornado, given the model-derived storm environment attributes. Combining empirical frequencies and forecast attributes produces better forecasts than solely using mid- or low-level UH, even if the UH is filtered using environmental parameter thresholds. Empirical tornado frequencies were derived using severe right-moving supercellular storms associated with a local storm report (LSR) of a tornado, severe wind, or severe hail for a given significant tornado parameter (STP) value from Storm Prediction Center (SPC) mesoanalysis grids in 2014–15. The NSSL–WRF ensemble produced the forecast STP values and simulated right-moving supercells, which were identified using a UH exceedance threshold. Model-derived probabilities are verified using tornado segment data from just right-moving supercells and from all tornadoes, as are the SPC-issued 0600 UTC tornado probabilities from the initial day 1 forecast valid 1200–1159 UTC the following day. The STP-based probabilistic forecasts perform comparably to SPC tornado probability forecasts in many skill metrics (e.g., reliability) and thus could be used as first-guess forecasts. Comparison with prior methodologies shows that probabilistic environmental information improves CAM-based tornado forecasts.
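The core lookup step, mapping a model-derived STP value to an empirical climatological tornado frequency, can be illustrated with a toy table. The bin edges and frequencies below are invented for demonstration; they are not the 2014–15 SPC mesoanalysis climatology.

```python
import numpy as np

# Hypothetical P(tornado within 40 km | right-moving supercell, STP bin);
# real values would be derived from the 2014-15 climatology.
stp_bin_edges = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
tornado_freq = np.array([0.03, 0.08, 0.15, 0.25, 0.40])

def stp_to_probability(stp_values):
    """Map forecast STP at simulated-supercell points (identified via a
    UH exceedance threshold) to empirical tornado frequencies."""
    idx = np.clip(np.digitize(stp_values, stp_bin_edges) - 1,
                  0, len(tornado_freq) - 1)
    return tornado_freq[idx]

probs = stp_to_probability(np.array([0.2, 1.5, 6.0]))
```

This treats the CAM attribute probabilistically, as the abstract describes, rather than as a binary threshold exceedance: each simulated supercell contributes a climatologically calibrated probability instead of a 0/1 event.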
The National Severe Storms Laboratory (NSSL) Warn-on-Forecast System (WoFS) is an experimental real-time, rapidly updating convection-allowing ensemble that provides probabilistic short-term thunderstorm forecasts. This study evaluates the impacts of reducing the forecast model horizontal grid spacing Δx from 3 km to 1.5 km on the WoFS deterministic and probabilistic forecast skill, using eleven case days selected from the 2020 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiment (SFE). Verification methods include (i) subjective forecaster impressions; (ii) a deterministic object-based technique that identifies forecast reflectivity and rotation track storm objects as contiguous local maxima in the composite reflectivity and updraft helicity fields, respectively, and matches them to observed storm objects; and (iii) a recently developed algorithm that matches observed mesocyclones to mesocyclone probability swath objects constructed from the full ensemble of rotation track objects. Reducing Δx fails to systematically improve deterministic skill in forecasting reflectivity object occurrence, as measured by critical success index (CSIDET), a metric that incorporates both probability of detection (PODDET) and false alarm ratio (FARDET). However, compared to the Δx = 3 km configuration, the Δx = 1.5 km WoFS shows improved mid-level mesocyclone detection, as evidenced by its statistically significant (i) higher CSIDET for deterministic mid-level rotation track objects and (ii) higher normalized area under the performance diagram curve (NAUPDC) score for probability swath objects. Comparison between Δx = 3 km and Δx = 1.5 km reflectivity object properties reveals that the latter have 30% stronger mean updraft speeds, 17% stronger median 80-m winds, 67% larger median hail diameter, and 28% higher median near-storm-maximum 0–3-km storm-relative helicity.
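A minimal sketch of object identification and matching of the kind described in method (ii): contiguous regions exceeding a field threshold become objects, forecast objects are matched to observed objects by centroid distance, and CSI is computed from the resulting hits, misses, and false alarms. The 40-dBZ threshold and 10-gridpoint match distance are assumptions, not the study's criteria.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def find_objects(field, thresh=40.0):
    """Return centroids of contiguous regions where `field` >= `thresh`."""
    labeled, n = label(field >= thresh)
    return [center_of_mass(field, labeled, i + 1) for i in range(n)]

def object_csi(forecast_objs, observed_objs, max_dist=10.0):
    """Greedy centroid-distance matching; CSI = hits / (hits + misses + FAs)."""
    unmatched_obs = list(observed_objs)
    hits = 0
    for f in forecast_objs:
        dists = [np.hypot(f[0] - o[0], f[1] - o[1]) for o in unmatched_obs]
        if dists and min(dists) <= max_dist:
            unmatched_obs.pop(int(np.argmin(dists)))
            hits += 1
    misses = len(unmatched_obs)
    false_alarms = len(forecast_objs) - hits
    return hits / (hits + misses + false_alarms)

# Toy grids: two forecast storms, one observed storm nearby.
fcst = np.zeros((50, 50)); fcst[10:13, 10:13] = 55.0; fcst[40:42, 40:42] = 50.0
obs = np.zeros((50, 50)); obs[12:15, 11:14] = 50.0
score = object_csi(find_objects(fcst), find_objects(obs))
```

The same machinery applies to rotation track objects by substituting the updraft helicity field and an appropriate UH threshold.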
Probabilistic ensemble-derived tornado forecasts generated from convection-allowing models often use hourly maximum updraft helicity (UH) alone or in combination with environmental parameters as a proxy for right-moving (RM) supercells. However, when UH occurrence is a condition for tornado probability generation, false alarm areas can occur from UH swaths associated with nocturnal mesoscale convective systems, which climatologically produce fewer tornadoes than RM supercells. This study incorporates UH timing information with the forecast near-storm significant tornado parameter (STP) to calibrate the forecast tornado probability. To generate the probabilistic forecasts, three sets of observed climatological tornado frequencies given an RM supercell and STP value are incorporated with the model output, two of which use UH timing information. One method uses the observed climatological tornado frequency for a given 3-h window to generate the probabilities. Another normalizes the observed climatological tornado frequency by the number of hail, wind, and tornado reports observed in that 3-h window compared to the maximum number of reports in any 3-h window. The final method is independent of when UH occurs and uses the observed climatological tornado frequency encompassing all hours. The normalized probabilities reduce the false alarm area compared to the other methods but have a smaller area under the ROC curve and require a much higher percentile of the STP distribution to be used in probability generation to become reliable. Case studies demonstrate that the normalized probabilities highlight the most likely area for evening RM supercellular tornadoes, decreasing the nocturnal false alarm by assuming a linear convective mode.
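The report-count normalization described above can be sketched numerically: each 3-h window's climatological tornado frequency is scaled by that window's share of severe reports relative to the busiest window, damping overnight (MCS-dominated) contributions. All frequencies and counts below are invented for illustration.

```python
import numpy as np

# Hypothetical climatological P(tornado | right-moving supercell) per
# 3-h UTC window, and total severe (hail + wind + tornado) report counts.
windows = ["00-03", "03-06", "06-09", "09-12",
           "12-15", "15-18", "18-21", "21-00"]
tor_freq = np.array([0.10, 0.08, 0.05, 0.03, 0.04, 0.09, 0.14, 0.15])
report_counts = np.array([900, 500, 200, 100, 150, 700, 1600, 1800])

# Scale each window's frequency by its report count relative to the
# maximum; windows with few reports (e.g., overnight) are downweighted.
normalized = tor_freq * report_counts / report_counts.max()
```

The busiest window keeps its full climatological frequency, while quieter windows are reduced, which is the mechanism by which the normalized probabilities shrink nocturnal false alarm area in the abstract's case studies.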
Verification methods for convection-allowing models (CAMs) should consider the fine-scale spatial and temporal detail provided by CAMs, and including both neighborhood and object-based methods can account for displaced features that may still provide useful information. This work explores both contingency table-based verification techniques and object-based verification techniques as they relate to forecasts of severe convection. Two key fields in severe weather forecasting are investigated: updraft helicity (UH) and simulated composite reflectivity. UH is used to generate severe weather probabilities called surrogate severe fields, which have two tunable parameters: the UH threshold and the smoothing level. Probabilities computed using the UH threshold and smoothing level that give the best area under the receiver operating characteristic (ROC) curve result in very high probabilities, while optimizing the parameters based on the reliability component of the Brier score results in much lower probabilities. Subjective ratings from participants in the 2018 NOAA Hazardous Weather Testbed Spring Forecasting Experiment (SFE) provide a complementary evaluation source. This work compares the verification methodologies in the context of three CAMs using the Finite-Volume Cubed-Sphere Dynamical Core (FV3), which will be the foundation of the United States' Unified Forecast System (UFS). Three agencies ran FV3-based CAMs during the five-week 2018 SFE. These FV3-based CAMs are verified alongside a current operational CAM, the High-Resolution Rapid Refresh version 3 (HRRRv3). The HRRR is planned to eventually use the FV3 dynamical core as part of the UFS; as such, evaluations relative to current HRRR configurations are imperative to maintaining high forecast quality and informing future implementation decisions.
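The contingency-table scores and Brier score referenced above can be sketched with toy binary and probabilistic grids. This is a generic illustration of the standard metric definitions, not the study's verification code.

```python
import numpy as np

def contingency_scores(forecast, observed):
    """POD, FAR, and CSI from binary forecast/observation arrays."""
    hits = np.sum(forecast & observed)
    misses = np.sum(~forecast & observed)
    false_alarms = np.sum(forecast & ~observed)
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    return pod, far, csi

def brier_score(probs, observed):
    """Mean squared error of probabilistic forecasts vs. binary outcomes."""
    return np.mean((probs - observed.astype(float)) ** 2)

f = np.array([True, True, False, True, False])
o = np.array([True, False, False, True, True])
pod, far, csi = contingency_scores(f, o)
bs = brier_score(np.array([0.9, 0.6, 0.1, 0.8, 0.3]), o)
```

Tuning the surrogate severe parameters against ROC area versus the Brier score's reliability component, as the abstract describes, amounts to optimizing different functionals of these same hit/miss/false-alarm counts and forecast probabilities.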