Although uncertainty is inherent in weather forecasts, explicit numeric uncertainty estimates are rarely included in public forecasts for fear that they will be misunderstood. Of particular concern are situations in which precautionary action is required at low probabilities, often the case with severe events. At present, a categorical weather warning system is used. The work reported here tested the relative benefits of several forecast formats, comparing decisions made with and without uncertainty forecasts. In three experiments, participants assumed the role of a manager of a road maintenance company in charge of deciding whether to pay to salt the roads and avoid a potential penalty associated with icy conditions. Participants used overnight low temperature forecasts accompanied in some conditions by uncertainty estimates and in others by decision advice comparable to categorical warnings. Results suggested that uncertainty information improved decision quality overall and increased trust in the forecast. Participants with uncertainty forecasts took appropriate precautionary action and withheld unnecessary action more often than did participants using deterministic forecasts. When error in the forecast increased, participants with conventional forecasts were reluctant to act. However, this effect was attenuated by uncertainty forecasts. Providing categorical decision advice alone did not improve decisions. However, combining decision advice with uncertainty estimates resulted in the best performance overall. The results reported here have important implications for the development of forecast formats to increase compliance with severe weather warnings as well as other domains in which one must act in the face of uncertainty.
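The road-salting task described above reduces to an expected-cost comparison: salting is worthwhile whenever the probability of freezing times the penalty exceeds the fixed treatment cost. A minimal sketch of that decision rule, with all dollar amounts hypothetical:

```python
def should_salt(p_freeze, salt_cost, penalty):
    """Return True when the expected penalty from untreated icy roads
    exceeds the fixed cost of salting.

    p_freeze:  probability the overnight low falls at or below freezing
    salt_cost: fixed cost of applying salt
    penalty:   cost incurred if roads freeze untreated
    """
    return p_freeze * penalty > salt_cost

# With a $1,000 salting cost and a $10,000 penalty, the break-even
# probability is 0.10: salt whenever p_freeze exceeds 10%.
print(should_salt(0.30, 1_000, 10_000))  # expected penalty $3,000 > $1,000 -> True
print(should_salt(0.05, 1_000, 10_000))  # expected penalty $500 < $1,000  -> False
```

This illustrates why an explicit uncertainty estimate can help: a deterministic forecast of, say, 34°F gives the user no way to locate p_freeze relative to the break-even threshold.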
The general public understands that there is uncertainty inherent in deterministic forecasts, as well as some of the factors that increase uncertainty. This was determined in an online survey of 1340 residents of Washington and Oregon, USA. Understanding was probed using questions that asked participants what they expected to observe when given a deterministic forecast with a specified lead time, for a particular weather parameter, during a particular time of year. It was also probed by asking participants to estimate the number of observations, out of 100, that they expected to fall within specified ranges around the deterministic forecast. Almost all respondents (99.99%) anticipated some uncertainty in the deterministic forecast. Furthermore, their answers suggested that they expected greater uncertainty for longer lead times and when the forecasted value deviated from climatic norms. Perhaps most noteworthy was that they expected specific forecast biases (e.g. over-forecasting of extremes), most of which were not borne out by an analysis of local National Weather Service verification data. In summary, users had well-formed uncertainty expectations, suggesting that they are prepared to understand explicit uncertainty forecasts for a wide range of parameters. Indeed, explicit uncertainty estimates may be necessary to overcome some of the anticipated forecast biases that may be affecting the usefulness of existing weather forecasts. Although these bias expectations are largely unjustified, they could lead users to adjust forecasts in ways that have serious negative consequences, especially with respect to extreme weather warnings.
Each of us makes important decisions involving uncertainty in domains in which we are not experts, such as retirement planning, medical treatment, and precautions against severe weather. Often, reliable information about uncertainty is available to us, although how effectively we incorporate it into the decision process remains in question. Previous research suggests that people are error-prone when reasoning with probability. However, recent research in weather-related decision making is more encouraging. Unlike earlier work that compares people's decisions with a rational standard, this research compares decisions made by people with and without uncertainty information. The results suggest that including specific numeric uncertainty estimates in weather forecasts increases trust and gives people a better idea of what to expect in terms of both the range of possible outcomes and the amount of uncertainty in the particular situation, all of which benefit precautionary decisions. However, the advantage of uncertainty estimates depends critically on how they are expressed. It is crucial that the expression be compatible with both the decision task and the cognitive processes of the user.
Despite improvements in forecasting extreme weather events, noncompliance with weather warnings among the public remains a problem. Although there are likely many reasons for noncompliance with weather warnings, one important factor might be people's past experiences with false alarms. The research presented here explores the role of false alarms in weather-related decision making. Over a series of trials, participants used an overnight low temperature forecast and advice from a decision aid to decide whether to apply salt treatment to a town's roads to prevent icy conditions or take the risk of withholding treatment, which resulted in a large penalty when freezing temperatures occurred. The decision aid gave treatment recommendations, some of which were false alarms, i.e., treatment was recommended but observed temperatures were above freezing. The rate at which the advice resulted in false alarms was manipulated between groups. Results suggest that very high and very low false alarm rates led to inferior decision making, but that lowering the false alarm rate slightly did not significantly affect compliance or decision quality. However, adding a probabilistic uncertainty estimate in the forecasts improved both compliance and decision quality. These findings carry implications about how weather warnings should be communicated to the public.
Verification scientists and practitioners came together at the 5th International Verification Methods Workshop in Melbourne, Australia, in December 2011 to discuss methods for evaluating forecasts within a wide variety of applications. Progress has been made in many areas, including improved verification reporting, wider use of diagnostic verification, development of new scores and techniques for difficult problems, and evaluation of forecasts for applications using meteorological information. There are many interesting challenges, particularly the improvement of methods to verify high-resolution ensemble forecasts, seamless predictions spanning multiple spatial and temporal scales, and multivariate forecasts. Greater efforts are needed to make the best use of new observations, forge greater links between data assimilation and verification, and develop better and more intuitive forecast verification products for end-users.
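One widely used verification measure for the probabilistic forecasts discussed above is the Brier score, the mean squared difference between forecast probabilities and binary outcomes. A minimal sketch with made-up forecast/observation pairs (the specific values are illustrative, not from any dataset in this work):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and binary
    outcomes (1 = event occurred, 0 = it did not).

    Ranges from 0 (perfect) to 1 (worst); lower is better.
    """
    n = len(forecasts)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n

# Four hypothetical freeze forecasts vs. what was observed:
bs = brier_score([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0])
# (0.01 + 0.04 + 0.09 + 0.01) / 4 = 0.0375
print(bs)
```

Scores like this let verification products report not just whether an event was predicted but how well calibrated the stated probabilities were, which connects directly to the end-user products called for in the workshop summary.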