Statistical downscaling (SD) is commonly used to provide information for the assessment of climate change impacts. Using as input the output from large-scale dynamical climate models and observation-based data products, SD aims to provide a finer grain of detail and to mitigate systematic biases. It is generally recognized as providing added value. However, one of the key assumptions of SD is that the relationships used to train the method during a historical period are unchanged in the future, in the face of climate change. The validity of this assumption is typically quite difficult to assess in the normal course of analysis, as observations of future climate are lacking. We approach this problem using a “perfect model” experimental design in which high-resolution dynamical climate model output is used as a surrogate for both past and future observations. We find that while SD in general adds considerable value, in certain well-defined circumstances it can produce highly erroneous results. Furthermore, the breakdown of SD in these contexts could not be foreshadowed during the typical course of evaluation based on only available historical data. We diagnose and explain the reasons for these failures in terms of physical, statistical, and methodological causes. These findings highlight the need for caution in the use of statistically downscaled products and the need for further research to consider other hitherto unknown pitfalls, perhaps utilizing more advanced perfect model designs than the one we have employed.
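For concreteness, a minimal sketch of such a perfect-model setup is given below (synthetic data and a trivial placeholder in place of a real SD method, purely for illustration): high-resolution model output serves as the "observations" in both periods, its coarsened version plays the role of the GCM to be downscaled, and the future high-resolution fields provide the otherwise-unavailable truth against which the downscaled result is scored.

```python
import numpy as np

def coarsen(field, factor=4):
    """Block-average a 2-D high-resolution field to mimic coarse GCM output.
    Assumes both dimensions are divisible by `factor`."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

def naive_downscale(coarse, factor=4):
    """Placeholder 'SD method': nearest-neighbour re-expansion of the coarse field."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
# Hypothetical daily high-resolution model output (time, y, x); the future period is warmed.
hist_hr = rng.normal(15.0, 8.0, size=(3650, 32, 32))
fut_hr = hist_hr + 4.0

# Coarsened fields play the role of the GCM that would normally be downscaled.
fut_coarse = np.stack([coarsen(f) for f in fut_hr])

# Apply the (trained) SD method to the future coarse fields and score it against the
# withheld future high-resolution "truth" -- the step impossible with real observations.
fut_sd = np.stack([naive_downscale(f) for f in fut_coarse])
rmse = np.sqrt(np.mean((fut_sd - fut_hr) ** 2))
print(f"Future-period RMSE of the downscaled fields: {rmse:.2f} degC")
```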
Observed temperature extremes over the continental United States can be represented by the ratio of daily record high temperatures to daily record low minimum temperatures, and this ratio has increased to a value of about 2 to 1, averaged over the first decade of the 21st century, albeit with large interannual variability. Two different versions of a global coupled climate model (CCSM4), as well as 23 other Coupled Model Intercomparison Project phase 5 (CMIP5) models, show larger values of this ratio than observations, mainly as a result of greater numbers of record highs since the 1980s compared with observations. This is partly because of the "warm 1930s" in the observations, which made it more difficult to set record highs later in the century, and partly because of a trend toward less rainfall and reduced evapotranspiration in the model versions compared with observations. We compute future projections of this ratio on the basis of its estimated dependence on mean temperature increase, a relationship we find to be robust in both observations and simulations. Using this relation also has the advantage of removing the dependence of a projection on a specific scenario. An empirical projection of the ratio of record highs to record lows is obtained from the nonlinear relationship in observations from 1930 to 2015, thus correcting downward the likely biased future projections of the model. For example, for a 3°C warming in US temperatures, the ratio of record highs to lows is projected to be ∼15 ± 8, compared to the present average ratio of just over 2.

An analysis of observed continental US record high maximum and record low minimum daily temperatures in a quality-controlled dataset of daily station data from 1950 to 2006 showed that the value of the ratio of record highs to record lows has been increasing over the United States since the late 1970s (1). Although there is considerable interannual variability in the ratio, which is to be expected when it is based on temperature time series with large interannual variability (2), averages of this ratio over the first decade of the 21st century had a value of about 2 to 1. This was a reflection of the increase of mean temperature and a shift of its distribution, affecting the tail behavior, such that, on average, for every one daily record low minimum there were roughly two record high maxima. This result was subsequently reproduced (3) and was also shown for Europe (4). A similar ratio of about 2 to 1 for monthly temperature records over Australia was shown for roughly this same period (5). As noted in other studies (6), there are geographic and seasonal characteristics to these records that depend on the variance of the temperature time series (7). Possible future increases to this ratio over the United States were shown for one future emission scenario (1). Several questions were raised in these studies that we address here. First, the previous analysis (1) started in 1950 because of the desire to use the more abundant and higher-quality postwar daily temperature data. Ho...
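As a concrete illustration of the statistic discussed above, the sketch below tallies running daily records from a single hypothetical station series (synthetic data; all names are placeholders) and forms the ratio of record highs to record lows over a decade. The counting convention assumed here is that a record is a value exceeding (or falling below) every earlier value for the same calendar day.

```python
import numpy as np
import pandas as pd

def count_running_records(daily, kind="high"):
    """Count, per year, days that set a new record for their calendar day.

    `daily` is a pandas Series of daily values indexed by date. A record high
    (low) is a value strictly above (below) every earlier value for the same
    calendar day; the first occurrence of each calendar day is excluded, since
    it would trivially be a record.
    """
    df = daily.to_frame("val")
    df["mmdd"] = df.index.strftime("%m-%d")
    is_record = np.zeros(len(df), dtype=bool)
    for _, grp in df.groupby("mmdd"):
        vals = grp["val"].to_numpy()
        if kind == "high":
            running = np.maximum.accumulate(vals)
            rec = np.r_[False, vals[1:] > running[:-1]]
        else:
            running = np.minimum.accumulate(vals)
            rec = np.r_[False, vals[1:] < running[:-1]]
        is_record[df.index.get_indexer(grp.index)] = rec
    return pd.Series(is_record, index=df.index).groupby(df.index.year).sum()

# Hypothetical station data: a warming trend favors record highs over record lows.
dates = pd.date_range("1950-01-01", "2010-12-31", freq="D")
rng = np.random.default_rng(1)
trend = 0.02 * (dates.year.to_numpy() - 1950)
tmax = pd.Series(25 + trend + rng.normal(0, 5, len(dates)), index=dates)
tmin = pd.Series(10 + trend + rng.normal(0, 5, len(dates)), index=dates)

highs = count_running_records(tmax, "high")
lows = count_running_records(tmin, "low")
print("ratio of record highs to record lows, 2001-2010:",
      round(highs.loc[2001:2010].sum() / lows.loc[2001:2010].sum(), 2))
```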
Future climate projections illuminate our understanding of the climate system and generate data products often used in climate impact assessments. Statistical downscaling (SD) is commonly used to address biases in global climate models (GCMs) and to translate large-scale projected changes to the higher spatial resolutions desired for regional- and local-scale studies. However, downscaled climate projections are sensitive to the method configuration and input data source choices made during the downscaling process, which can affect a projection's ultimate suitability for particular impact assessments. Quantifying how changes in inputs or parameters affect SD-generated projections of precipitation is critical for improving these datasets and their use by impacts researchers. Through analysis of a systematically designed set of 18 statistically downscaled future daily precipitation projections for the south-central United States, this study aims to improve the guidance available to impacts researchers. Two statistical processing techniques are examined: a ratio delta downscaling technique and an equi-ratio quantile mapping method. The projections are generated using as input results from three GCMs forced with representative concentration pathway (RCP) 8.5 and three gridded observation-based data products. Sensitivity analyses identify differences in the values of precipitation variables among the projections and the underlying reasons for the differences.

Results indicate that differences in how observational station data are converted to gridded daily observational products can markedly affect statistically downscaled future projections of wet-day frequency, intensity of precipitation extremes, and the length of multi-day wet and dry periods. The choice of downscaling technique also can affect the climate change signal for variables of interest, in some cases causing change signals to reverse sign. Hence, this study provides illustrations and explanations for some of the downscaled precipitation projection differences that users may encounter, as well as evidence of symptoms that can inform user decisions.
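To make the two statistical processing techniques concrete, the sketch below gives a generic, simplified rendering of a ratio ("delta") change-factor step and of equi-ratio CDF matching for daily precipitation. It is a sketch under assumed simplifications, not the configuration used in the study, and all series are synthetic placeholders.

```python
import numpy as np

def ratio_delta(obs_hist, mod_hist, mod_fut):
    """Ratio delta: scale the observed series by the model's relative change in mean precipitation."""
    return obs_hist * (mod_fut.mean() / mod_hist.mean())

def equiratio_qm(obs_hist, mod_hist, mod_fut, q=np.linspace(0.01, 0.99, 99)):
    """Equi-ratio CDF matching: multiply each future model value by the ratio of the observed
    quantile to the historical-model quantile at that value's future-model probability."""
    obs_q = np.quantile(obs_hist, q)
    mod_hist_q = np.quantile(mod_hist, q)
    # Non-exceedance probability of each future value within the future-model distribution
    p = np.clip(np.searchsorted(np.sort(mod_fut), mod_fut) / len(mod_fut), q[0], q[-1])
    obs_at_p = np.interp(p, q, obs_q)
    mod_at_p = np.interp(p, q, mod_hist_q)
    return mod_fut * obs_at_p / np.where(mod_at_p > 0, mod_at_p, 1.0)

# Synthetic daily precipitation samples (mm/day), roughly thirty years of days each
rng = np.random.default_rng(2)
obs_hist = rng.gamma(0.6, 8.0, 10950)
mod_hist = rng.gamma(0.5, 7.0, 10950)
mod_fut = rng.gamma(0.5, 8.5, 10950)   # the future run has heavier extremes

print("ratio-delta 99th percentile:  ", round(np.quantile(ratio_delta(obs_hist, mod_hist, mod_fut), 0.99), 1))
print("equi-ratio QM 99th percentile:", round(np.quantile(equiratio_qm(obs_hist, mod_hist, mod_fut), 0.99), 1))
```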
The cumulative distribution function transform (CDFt) downscaling method has been widely used to provide local-scale information and bias correction for output from physical climate models. The CDFt approach belongs to the category of statistical downscaling methods that operate via transformations between statistical distributions. Although numerous studies have demonstrated that such methods provide value overall, much less effort has focused on their performance with regard to values in the tails of distributions. We evaluate the performance of CDFt-generated tail values based on four distinct approaches, two native to CDFt and two of our own creation, in the context of a "Perfect Model" setting in which global climate model output is used as a proxy for both observational and model data. We find that the native CDFt approaches can have sub-optimal performance in the tails, particularly with regard to the maximum value. However, our alternative approaches provide substantial improvement.
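The core idea of the CDF transform can be summarized as estimating a future local ("observed") CDF from the three available distributions, roughly F_of = F_oh ∘ F_mh⁻¹ ∘ F_mf, and then quantile-mapping the future model values onto it. The sketch below is a simplified empirical-CDF rendering of that idea with synthetic data; it is not the reference CDFt implementation and omits the method's refinements (including its tail treatments).

```python
import numpy as np

def ecdf(sample):
    """Return an empirical CDF evaluator for a 1-D sample."""
    s = np.sort(sample)
    return lambda x: np.searchsorted(s, x, side="right") / len(s)

def inv_ecdf(sample):
    """Return an empirical quantile function for a 1-D sample."""
    s = np.sort(sample)
    probs = (np.arange(len(s)) + 0.5) / len(s)
    return lambda p: np.interp(p, probs, s)

def cdft_downscale(obs_hist, mod_hist, mod_fut, n_grid=1000):
    """Simplified CDF-transform: estimate the future local CDF, then quantile-map onto it."""
    F_oh, F_mf = ecdf(obs_hist), ecdf(mod_fut)
    Finv_mh = inv_ecdf(mod_hist)
    support = np.linspace(mod_fut.min(), mod_fut.max(), n_grid)
    # Estimated future "observed" CDF: F_of(x) = F_oh( F_mh^-1( F_mf(x) ) )
    F_of = F_oh(Finv_mh(F_mf(support)))
    F_of = np.maximum.accumulate(F_of)      # enforce monotonicity for the inversion below
    # Downscaled values: F_of^-1( F_mf(x) ) for each raw future model value x
    return np.interp(F_mf(mod_fut), F_of, support)

# Synthetic daily maximum temperature (degC): a cool-biased model and a warmer future run
rng = np.random.default_rng(3)
obs_hist = rng.normal(22.0, 7.0, 7300)
mod_hist = rng.normal(20.0, 6.0, 7300)
mod_fut = rng.normal(24.0, 6.5, 7300)

down = cdft_downscale(obs_hist, mod_hist, mod_fut)
print("raw future maximum:       ", round(mod_fut.max(), 1))
print("downscaled future maximum:", round(down.max(), 1))
```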
Statistical downscaling methods are extensively used to refine future climate change projections produced by physical models. Distributional methods, which are among the simplest to implement, are also among the most widely used, either by themselves or in conjunction with more complex approaches. Here, building on earlier work, we evaluate the performance of seven methods in this class that range widely in their degree of complexity. We employ daily maximum temperature over the Continental U.S. in a “Perfect Model” approach in which the output from a large-scale dynamical model is used as a proxy for both observations and model output. Importantly, this experimental design allows one to estimate expected performance under a future high-emissions climate-change scenario. We examine skill over the full distribution as well as in the tails, seasonal variations in skill, and the ability to reproduce the climate change signal. Viewed broadly, there are generally modest overall differences in performance across the majority of the methods. However, the philosophical paradigms used to define the downscaling algorithms divide the seven methods into two classes of better versus poorer overall performance. In particular, the bias-correction-plus-change-factor approach performs better overall than the bias-correction-only approach. Finally, we examine the performance of some special tail treatments that we introduced in earlier work, which were based on extensions of a widely used existing scheme. We find that our tail treatments provide a further enhancement in downscaling extremes.
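A generic, simplified contrast between the two paradigms, using daily maximum temperature and synthetic placeholder series (not the specific algorithms evaluated here), might look as follows: bias correction alone maps the future model values onto the observed historical distribution, whereas the change-factor variant perturbs the observed values by the model-projected change at each quantile.

```python
import numpy as np

def quantile_map(x, src, dst, q=np.linspace(0.005, 0.995, 199)):
    """Map values x from the distribution of `src` onto the distribution of `dst`."""
    return np.interp(np.interp(x, np.quantile(src, q), q), q, np.quantile(dst, q))

def bc_only(obs_hist, mod_hist, mod_fut):
    """Bias correction only: quantile-map the future model run onto historical observations."""
    return quantile_map(mod_fut, mod_hist, obs_hist)

def bc_change_factor(obs_hist, mod_hist, mod_fut, q=np.linspace(0.005, 0.995, 199)):
    """Bias correction plus change factor: add the model's quantile-wise change to the observations."""
    change = np.quantile(mod_fut, q) - np.quantile(mod_hist, q)
    p = np.interp(obs_hist, np.quantile(obs_hist, q), q)
    return obs_hist + np.interp(p, q, change)

rng = np.random.default_rng(4)
obs_hist = rng.normal(28.0, 9.0, 7300)   # placeholder daily Tmax, observations
mod_hist = rng.normal(26.0, 7.5, 7300)   # biased model, historical run
mod_fut = rng.normal(31.0, 8.0, 7300)    # same model, high-emissions future run

print("BC-only mean change:", round(bc_only(obs_hist, mod_hist, mod_fut).mean() - obs_hist.mean(), 2))
print("BC+CF   mean change:", round(bc_change_factor(obs_hist, mod_hist, mod_fut).mean() - obs_hist.mean(), 2))
```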
Statistical downscaling (SD) methods used to refine future climate change projections produced by physical models have been applied to a variety of variables. We evaluate four empirical distributional-type SD methods as applied to daily precipitation, which, because of its binary nature (wet vs. dry days) and tendency toward a long right tail, presents a special challenge. Using data over the Continental U.S., we employ a ‘Perfect Model’ approach in which data from a large-scale dynamical model is used as a proxy for both observations and model output. This experimental design allows for an assessment of the expected performance of SD methods in a future high-emissions climate-change scenario. We find that performance is tied much more to configuration options than to the choice of SD method. In particular, proper handling of dry days (i.e., those with zero precipitation) is crucial to success. Although SD skill in reproducing day-to-day variability is modest (~15–25%), about half that found for temperature in our earlier work, skill is much greater with regard to reproducing the statistical distribution of precipitation (~50–60%). This disparity is the result of the stochastic nature of precipitation, as pointed out by other authors. Distributional skill in the tails is lower overall (~30–35%), although in some regions and seasons it is small to non-existent. Even when SD skill in the tails is reasonably good, in some instances, particularly in the southeastern United States during summer, absolute daily errors at some gridpoints can be large (~20 mm or more), highlighting the challenges in projecting future extremes.
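Because dry-day handling is singled out above as the decisive configuration choice, the following sketch shows one common way of dealing with it (a simplified placeholder, not the exact procedure evaluated here): calibrate a model wet-day threshold so that the historical wet-day frequency matches observations, then quantile-map only the wet days.

```python
import numpy as np

def downscale_precip(obs_hist, mod_hist, mod_fut, wet_mm=0.1):
    """Frequency-adapted quantile mapping for daily precipitation (simplified sketch)."""
    # 1. Choose a model threshold so the model's historical wet-day fraction matches observations.
    obs_wet_frac = np.mean(obs_hist >= wet_mm)
    mod_thresh = np.quantile(mod_hist, 1.0 - obs_wet_frac)

    # 2. Quantile-map wet days only, from the model's wet-day distribution to the observed one.
    q = np.linspace(0.01, 0.99, 99)
    mod_wet_q = np.quantile(mod_hist[mod_hist >= mod_thresh], q)
    obs_wet_q = np.quantile(obs_hist[obs_hist >= wet_mm], q)

    out = np.zeros_like(mod_fut)
    wet = mod_fut >= mod_thresh
    p = np.interp(mod_fut[wet], mod_wet_q, q)
    out[wet] = np.interp(p, q, obs_wet_q)
    return out   # days below the calibrated threshold are left as zero (dry)

rng = np.random.default_rng(5)
# Synthetic daily precipitation: many drizzle days in the model that observations record as dry.
obs_hist = np.where(rng.random(10950) < 0.7, 0.0, rng.gamma(0.7, 9.0, 10950))
mod_hist = rng.gamma(0.3, 4.0, 10950)
mod_fut = rng.gamma(0.3, 4.5, 10950)

down = downscale_precip(obs_hist, mod_hist, mod_fut)
print("obs wet-day fraction:        ", round(np.mean(obs_hist >= 0.1), 2))
print("downscaled wet-day fraction: ", round(np.mean(down >= 0.1), 2))
```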
The 2019 water-year mean streamflow into Chesapeake Bay from the Susquehanna River was the third highest since 1891. Anthropogenic climate change has increased the probability of such extreme Susquehanna River mean streamflows.