Statistical downscaling (SD) is commonly used to provide information for the assessment of climate change impacts. Using as input the output from large-scale dynamical climate models and observation-based data products, SD aims to provide a finer grain of detail and to mitigate systematic biases. It is generally recognized as providing added value. However, one of the key assumptions of SD is that the relationships used to train the method during a historical period are unchanged in the future, in the face of climate change. The validity of this assumption is typically quite difficult to assess in the normal course of analysis, as observations of future climate are lacking. We approach this problem using a “perfect model” experimental design in which high-resolution dynamical climate model output is used as a surrogate for both past and future observations. We find that while SD in general adds considerable value, in certain well-defined circumstances it can produce highly erroneous results. Furthermore, the breakdown of SD in these contexts could not be foreshadowed during the typical course of evaluation based on only available historical data. We diagnose and explain the reasons for these failures in terms of physical, statistical, and methodological causes. These findings highlight the need for caution in the use of statistically downscaled products and the need for further research to consider other hitherto unknown pitfalls, perhaps utilizing more advanced perfect model designs than the one we have employed.
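The perfect-model logic described above can be sketched in a few lines. Everything in this sketch is an illustrative assumption, not the study's actual setup: synthetic Gaussian temperatures, a simple linear-regression transfer function as the "SD method," and an imposed uniform warming. The point it demonstrates is the design itself: high-resolution model output serves as surrogate "observations" for both past and future, so the future skill of a relationship trained only on the historical period can, unlike in the real world, be verified directly.

```python
# Minimal sketch of a "perfect model" evaluation of statistical downscaling.
# All data are synthetic and the SD method is ordinary linear regression --
# illustrative assumptions only, not the experimental design of the study.
import numpy as np

rng = np.random.default_rng(0)

# Surrogate "truth": high-resolution model temperature, past and future.
truth_past = 15 + 2 * rng.standard_normal(1000)
truth_future = truth_past + 3.0                   # imposed warming (assumed)

# Surrogate "GCM": a coarsened, biased, noisy view of the same fields.
coarse_past = 0.8 * truth_past + 5 + 0.2 * rng.standard_normal(1000)
coarse_future = 0.8 * truth_future + 5 + 0.2 * rng.standard_normal(1000)

# Train SD on the historical period only -- the key assumption under test
# is that this relationship still holds in the future climate.
slope, intercept = np.polyfit(coarse_past, truth_past, 1)

# Apply the historical relationship to the future coarse fields...
downscaled_future = slope * coarse_future + intercept

# ...and, unlike in the real world, verify against future "observations".
future_error = np.mean(downscaled_future - truth_future)
print(f"mean future downscaling error: {future_error:.2f} K")
```

Because the statistical relationship here is stationary by construction, the future error is small; the design becomes diagnostic precisely when the surrogate future violates that stationarity and the error grows.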
Observed temperature extremes over the continental United States can be represented by the ratio of daily record high maximum temperatures to daily record low minimum temperatures, and this ratio has increased to a value of about 2 to 1, averaged over the first decade of the 21st century, albeit with large interannual variability. Two different versions of a global coupled climate model (CCSM4), as well as 23 other Coupled Model Intercomparison Project phase 5 (CMIP5) models, show larger values of this ratio than observations, mainly as a result of greater numbers of record highs since the 1980s compared with observations. This is partly because of the "warm 1930s" in the observations, which made it more difficult to set record highs later in the century, and partly because of a trend toward less rainfall and reduced evapotranspiration in the model versions compared with observations. We compute future projections of this ratio on the basis of its estimated dependence on mean temperature increase, a dependence we find to be robustly at play in both observations and simulations. The use of this relation also has the advantage of removing a projection's dependence on a specific scenario. An empirical projection of the ratio of record highs to record lows is obtained from the nonlinear relationship in observations from 1930 to 2015, thus correcting downward the likely biased future projections of the models. For example, for a 3°C warming in US temperatures, the ratio of record highs to lows is projected to be ∼15 ± 8, compared to the present average ratio of just over 2.

An analysis of observed continental US record high maximum and record low minimum daily temperatures in a quality-controlled dataset of daily station data from 1950 to 2006 showed that the value of the ratio of record highs to record lows has been increasing over the United States since the late 1970s (1).
Although there is considerable interannual variability in the ratio, which is to be expected when it is based on temperature time series with large interannual variability (2), averages of this ratio over the first decade of the 21st century had a value of about 2 to 1. This was a reflection of the increase of mean temperature and a shift of its distribution, affecting the tail behavior, such that, on average, for every one daily record low minimum there were roughly two record high maxima. This result was subsequently reproduced (3) and was also shown for Europe (4). A similar ratio of about 2 to 1 for monthly temperature records over Australia was shown for roughly this same period (5). As noted in other studies (6), there are geographic and seasonal characteristics to these records that depend on the variance of the temperature time series (7). Possible future increases in this ratio over the United States were shown for one future emission scenario (1).

Several questions were raised in these studies that we address here. First, the previous analysis (1) started in 1950 because of the desire to use the more abundant and higher-quality postwar daily temperature data. Ho...
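The record-counting computation underlying this ratio can be sketched as follows. The data here are synthetic daily maximum and minimum temperatures with an assumed modest warming trend; station handling, quality control, and the start-year choices discussed above are all omitted. For each calendar day, a record high (low) is tallied whenever that day's maximum (minimum) exceeds (falls below) all values seen in earlier years.

```python
# Sketch of counting daily record highs and record lows, then forming the
# decade-averaged ratio discussed above. Synthetic data; the trend size and
# variance are illustrative assumptions, not values from the cited dataset.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_days = 60, 365
trend = np.linspace(0.0, 1.5, n_years)[:, None]   # assumed warming (degrees C)
tmax = 25 + trend + 3 * rng.standard_normal((n_years, n_days))
tmin = 10 + trend + 3 * rng.standard_normal((n_years, n_days))

record_highs = np.zeros(n_years, dtype=int)
record_lows = np.zeros(n_years, dtype=int)
for day in range(n_days):
    hi, lo = tmax[0, day], tmin[0, day]
    for year in range(1, n_years):
        if tmax[year, day] > hi:      # new record high for this calendar day
            hi = tmax[year, day]
            record_highs[year] += 1
        if tmin[year, day] < lo:      # new record low for this calendar day
            lo = tmin[year, day]
            record_lows[year] += 1

# Ratio aggregated over the final decade; with no trend it would hover
# near 1, and a warming trend pushes it above 1.
decade_ratio = record_highs[-10:].sum() / record_lows[-10:].sum()
print(f"record highs to record lows over final decade: {decade_ratio:.2f}")
```

Note that even in a stationary climate both counts decay roughly as 1/t with record length, which is why the ratio, rather than the raw counts, is the informative statistic.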
Future climate projections illuminate our understanding of the climate system and generate data products often used in climate impact assessments. Statistical downscaling (SD) is commonly used to address biases in global climate models (GCMs) and to translate large‐scale projected changes to the higher spatial resolutions desired for regional and local scale studies. However, downscaled climate projections are sensitive to method configuration and input data source choices made during the downscaling process that can affect a projection's ultimate suitability for particular impact assessments. Quantifying how changes in inputs or parameters affect SD‐generated projections of precipitation is critical for improving these datasets and their use by impacts researchers. Through analysis of a systematically designed set of 18 statistically downscaled future daily precipitation projections for the south‐central United States, this study aims to improve the guidance available to impacts researchers. Two statistical processing techniques are examined: a ratio delta downscaling technique and an equi‐ratio quantile mapping method. The projections are generated using as input results from three GCMs forced with representative concentration pathway (RCP) 8.5 and three gridded observation‐based data products. Sensitivity analyses identify differences in the values of precipitation variables among the projections and the underlying reasons for the differences.

Results indicate that differences in how observational station data are converted to gridded daily observational products can markedly affect statistically downscaled future projections of wet‐day frequency, intensity of precipitation extremes, and the length of multi‐day wet and dry periods. The choice of downscaling technique also can affect the climate change signal for variables of interest, in some cases causing change signals to reverse sign.
Hence, this study provides illustrations and explanations for some of the downscaled precipitation projection differences that users may encounter, as well as diagnostic symptoms of those differences that can inform users' choices among downscaling methods and observational products.
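One of the two techniques named above, equi-ratio quantile mapping, can be sketched roughly as follows. This is a generic empirical-quantile implementation assembled from the quantile-mapping literature, applied to synthetic gamma-distributed wet-day precipitation; it is an assumption about the general form of the method, not the study's exact configuration. Each future model value is rescaled by the ratio of the observed to the historical-model quantile evaluated at that value's future-model probability, which corrects magnitudes toward observations while preserving the model's relative change signal.

```python
# Hedged sketch of equi-ratio quantile mapping for precipitation.
# Synthetic data and empirical quantiles throughout -- illustrative only.
import numpy as np

rng = np.random.default_rng(2)
obs_hist = rng.gamma(shape=0.8, scale=8.0, size=5000)  # "observed" precip (mm)
gcm_hist = rng.gamma(shape=0.8, scale=5.0, size=5000)  # biased model, historical
gcm_fut = rng.gamma(shape=0.8, scale=6.0, size=5000)   # biased model, future

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at points `x`."""
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

def quantile(sample, p):
    """Empirical inverse CDF (quantile function)."""
    return np.quantile(sample, np.clip(p, 0.0, 1.0))

# Equi-ratio form: scale each future value by the ratio of observed to
# historical-model quantiles at that value's future-model probability.
p = np.clip(ecdf(gcm_fut, gcm_fut), 1e-6, 1 - 1e-6)   # avoid endpoints
downscaled = gcm_fut * quantile(obs_hist, p) / quantile(gcm_hist, p)

print(round(obs_hist.mean(), 1), round(gcm_fut.mean(), 1),
      round(downscaled.mean(), 1))
```

A multiplicative (ratio) correction such as this is the conventional choice for precipitation, since it cannot turn dry days negative; the additive "equidistant" variant is typically reserved for temperature, and the contrast between such choices is one way change signals can differ, or even reverse sign, between techniques.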