The usefulness of any model depends in part on the accuracy and reliability of its output data. Yet because all models are abstractions of reality, and because precise input data are rarely if ever available, all output values are subject to imprecision. Input data and modeling uncertainties are not independent of each other; they can interact in various ways, and the end result is imprecision and uncertainty associated with model output. This chapter focuses on ways of identifying, quantifying, and communicating the uncertainties in model outputs.
Introduction

Models are the primary means we have of estimating the multiple impacts of alternative water resource system designs and operating policies. Models are used to estimate the values of various system performance indicators resulting from specific design and/or operating policy decisions. Model outputs are based on model structure, hydrologic and other time series inputs, and a host of parameters whose values characterize the system being simulated. Even if these assumptions and input data reflect, or are at least representative of, conditions believed to be true, we know the model outputs or results will be wrong. Our models are always simplifications of the real systems we are analyzing. Furthermore, we simply cannot forecast the future with precision. So we know the model outputs defining future conditions are, at best, uncertain estimates.

Some input data uncertainties can be reduced by additional research and further data collection and analysis. Before spending money and time to gather and analyze additional data, it is reasonable to ask what improvement in estimates of system performance, or what reduction in the uncertainty associated with those estimates, would result if all data and model uncertainties could be reduced, if not eliminated. Such information helps determine how much one would be willing to "pay" to reduce model output uncertainty. If the uncertainty on average is costing a lot, it may pay to invest in additional data collection, in more studies, or in developing better models, all aimed at reducing that uncertainty. If that uncertainty has only a very modest impact on the likely decision to be made, one should find other issues to worry about.

If it appears that reducing uncertainty is worthwhile, then the question is how best to do it. If doing so involves obtaining additional information, then clearly the value of this additional information, however measured, should exceed the cost of obtaining it.
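One common way to quantify the output uncertainty discussed above is Monte Carlo simulation: sample the uncertain inputs from assumed probability distributions, run the model once per sample, and summarize the spread of the resulting outputs. The sketch below is only illustrative, not a method prescribed by this chapter; the toy reservoir-yield model, the normal inflow distribution, and all parameter values are assumptions chosen for the example.

```python
import random
import statistics

def simulate_yield(inflow, demand):
    # Hypothetical one-line "model": the water delivered in a year
    # is the smaller of the available inflow and the demand.
    return min(inflow, demand)

random.seed(42)

# Assumed input uncertainty: annual inflow ~ Normal(mean=100, sd=15),
# in arbitrary volume units; demand is fixed at 90.
DEMAND = 90
samples = [simulate_yield(random.gauss(100, 15), DEMAND)
           for _ in range(10_000)]

# Summaries of output uncertainty: mean, spread, and the probability
# that demand is fully met (a simple reliability indicator).
mean_yield = statistics.mean(samples)
sd_yield = statistics.stdev(samples)
reliability = sum(s >= DEMAND for s in samples) / len(samples)
```

Rerunning such an experiment with a narrower inflow distribution shows how much the output spread (and hence decision risk) would shrink if input uncertainty were reduced, which is one way to judge whether additional data collection is worth its cost.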
The value of such information will be the benefits of more precise estimates of system performance, or of the reduction in uncertainty, that one can expect from obtaining it. If additional information is to be obtained, it should focus on reducing the uncertainties considered important, not the unimportant ones.

This chapter reviews some methods for identifying and communicating model output uncertainty. The discussion begins with a review of the