Abstract: This is the submitted manuscript (pre-print). The final published version (version of record) is available online via Annual Reviews at http://dx.doi.org/10.1146/annurev-environ-110615-090011. Please refer to any applicable terms of use of the publisher.
University of Bristol - Explore Bristol Research. General rights: This document is made available in accordance with publisher policies. Please cite only the published version using the reference above. Full terms of use are available: http://www.bristol.ac.uk/pur…
“…The primary reason for making uncertainty assessments for evaluating risk in natural hazards is because taking account of uncertainty might make a difference to the decision that is made (e.g. Hall and Solomatine, 2008; Rougier and Beven, 2013; Hall, 2003; and Simpson et al., 2016). For many decisions a complete, thoughtful uncertainty assessment of risk might not be justified by the cost in time and effort.…”
Section: Assessing Whether a Decision Is Robust To The Chosen Assumptions
“…Hall, 2006; Baroni and Tarantola, 2014; and Savage et al., 2016) or that allow us to explore the impact of distributions that are practically unconstrained due to a lack of observations (e.g. Prudhomme et al., 2010; Singh et al., 2014; Almeida et al., 2017). Alternatively, we can directly select an approach that attempts to find robust decisions in the presence of poorly bounded uncertainties (e.g.…”
Section: Assessing Whether a Decision Is Robust To The Chosen Assumptions
Abstract. Part 1 of this paper has discussed the uncertainties arising from gaps in
knowledge or limited understanding of the processes involved in different
natural hazard areas. Such deficits may include uncertainties about
frequencies, process representations, parameters, present and future boundary
conditions, consequences and impacts, and the meaning of observations in
evaluating simulation models. These are the epistemic uncertainties that can
be difficult to constrain, especially in terms of event or scenario
probabilities, even as elicited probabilities rationalized on the basis of
expert judgements. This paper reviews the issues raised by trying to quantify
the effects of epistemic uncertainties. Such scientific uncertainties might
have significant influence on decisions made, say, for risk management, so it
is important to examine the sensitivity of such decisions to different
feasible sets of assumptions, to communicate the meaning of associated
uncertainty estimates, and to provide an audit trail for the analysis. A
conceptual framework for good practice in dealing with epistemic
uncertainties is outlined and the implications of applying the principles to
natural hazard assessments are discussed. Six stages are recognized, with
recommendations at each stage as follows: (1) framing the analysis, preferably with
input from potential users; (2) evaluating the available data for epistemic uncertainties,
especially when they might lead to inconsistencies; (3) eliciting information on sources
of uncertainty from experts; (4) defining a workflow that will give reliable and accurate
results; (5) assessing robustness to uncertainty, including the impact on any
decisions that are dependent on the analysis; and (6) communicating the findings and meaning
of the analysis to potential users, stakeholders, and decision makers. Visualizations are
helpful in conveying the nature of the uncertainty outputs, while recognizing that the
deeper epistemic uncertainties might not be readily amenable to visualizations.
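Stage (5) of the workflow above, assessing whether a decision is robust to the chosen assumptions, can be illustrated with a minimal sketch. The hazard model, loss threshold, risk tolerance, and candidate distributions below are all hypothetical placeholders, not taken from the paper; the point is only that the same decision rule is re-evaluated under each feasible set of epistemic assumptions to see whether the decision changes.

```python
import random

random.seed(0)

# Hypothetical feasible assumptions about an uncertain hazard magnitude.
# Each entry is a different epistemic choice of distribution (illustrative only).
feasible_assumptions = {
    "lognormal, narrow": lambda: random.lognormvariate(0.0, 0.5),
    "lognormal, wide":   lambda: random.lognormvariate(0.0, 1.0),
    "uniform":           lambda: random.uniform(0.2, 3.0),
}

THRESHOLD = 2.0  # hypothetical design threshold triggering mitigation

def exceedance_probability(sample_fn, n=10_000):
    """Monte Carlo estimate of P(magnitude > THRESHOLD)."""
    return sum(sample_fn() > THRESHOLD for _ in range(n)) / n

def decision(p_exceed, tolerance=0.05):
    """Mitigate if the exceedance probability exceeds a risk tolerance."""
    return "mitigate" if p_exceed > tolerance else "accept"

decisions = {name: decision(exceedance_probability(fn))
             for name, fn in feasible_assumptions.items()}
print(decisions)

# If the decision disagrees across feasible assumptions, it is NOT robust
# to the epistemic uncertainty, and further analysis is warranted.
robust = len(set(decisions.values())) == 1
print("robust to assumptions:", robust)
```

If the decision flips between assumption sets, the epistemic uncertainty matters for this decision; if it does not, the analysis supports the audit trail the paper recommends without requiring the uncertainty itself to be resolved.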
“…However, in a situation of a high degree of doubt, two approaches may make a difference: namely flexibility and robustness. Decisions are flexible to the extent that "[they] can be reversed or modified as more information becomes available in the future" [33]. They are robust to the extent that they are "resilient to surprise, immune to ignorance" [25].…”
Global issues are such that we should assess and manage a variety of risks and uncertainties. Due to increasing world complexity, the development of an adequate and innovative conceptual framework, anchored in the literature, is required. This article contributes to this effort with an approach particularly relevant to decision-makers dealing with threats of different natures, limited heterogeneous information, and experts' assessments tainted by doubts. Our approach is based on two pillars: 1) an "acuity scale", based on the probability of the occurrence of an event, its impact, and the experts' degree of doubt; 2) a taxonomy focused on the concepts of risk, uncertainty, gamble, and butterfly ambiguity. Accordingly, we present in a second step the major management implications of such an approach. Global policy trends (e.g., the sustainability transition) put energy sector decision-makers at the forefront of risk and uncertainty management. Consequently, we carry out a case study focused on Swiss energy policy since the 1980s, including its inception, the turnaround provoked by the Fukushima accident, and the government's 2050 energy strategy. Our investigation shows that the proposed conceptual framework allows for the development of an original analysis of the main drivers that influence governmental policies and stakeholder strategies.
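The "acuity scale" described above combines three quantities: the probability of an event, its impact, and the experts' degree of doubt. A minimal sketch of how such a score and the accompanying taxonomy might be composed follows; the multiplicative weighting and the classification cut-offs are illustrative assumptions, not the authors' actual scale.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    probability: float  # expert-assessed P(event), in [0, 1]
    impact: float       # normalized impact, in [0, 1]
    doubt: float        # experts' degree of doubt, in [0, 1]

def acuity(t: Threat) -> float:
    """Illustrative acuity score: expected impact inflated by doubt.

    Higher doubt pushes a threat up the scale, reflecting the idea that
    poorly understood threats deserve more management attention.
    """
    return t.probability * t.impact * (1.0 + t.doubt)

def classify(t: Threat) -> str:
    """Map a threat onto the risk/uncertainty/gamble/butterfly-ambiguity
    taxonomy (the 0.3 and 0.5 cut-offs are assumed for illustration)."""
    if t.doubt < 0.3:
        return "risk" if t.probability >= 0.5 else "uncertainty"
    return "gamble" if t.impact >= 0.5 else "butterfly ambiguity"

nuclear = Threat("reactor accident", probability=0.01, impact=0.9, doubt=0.7)
print(acuity(nuclear), classify(nuclear))
```

A scale like this makes the role of doubt explicit: two threats with identical probability and impact are ranked differently when experts disagree about one of them.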
“…These frameworks have appealing properties in conditions of deep uncertainty, and do not depend on characterizing uncertainty via probability distributions (Cox, ; Heal & Millner, ; Lempert & Collins, ). A final alternative is sequential strategies, which build flexibility into decision making through staged implementation of mitigation efforts, leaving space for adaptation to changing conditions (Simpson et al., ). The general point is that risk assessments are typically not a direct input to the decisionmaker, but rather fed into a broader analysis framework wherein decisionmakers’ (or societal) preferences are explicitly incorporated, for example, through assigning utilities, measures of risk aversion and equity, and so forth, or informally incorporated via structured decision processes.…”
Section: Null Hypothesis Testing Is a Poor Descriptive And Normative Model
Many philosophers and statisticians argue that risk assessors are morally obligated to evaluate the probabilities and consequences of methodological error, and to base their decisions of whether to adopt a given parameter value, model, or hypothesis on those considerations. This argument is couched within the rubric of null hypothesis testing, which I suggest is a poor descriptive and normative model for risk assessment. Risk regulation is not primarily concerned with evaluating the probability of data conditional upon the null hypothesis, but rather with measuring risks, estimating the consequences of available courses of action and inaction, formally characterizing uncertainty, and deciding what to do based upon explicit values and decision criteria. In turn, I defend an ideal of value‐neutrality, whereby the core inferential tasks of risk assessment—such as weighing evidence, estimating parameters, and model selection—should be guided by the aim of correspondence to reality. This is not to say that value judgments be damned, but rather that they should be accounted for within a structured approach to decision analysis, rather than embedded within risk assessment in an informal manner.