Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.
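To make the technique under discussion concrete, the sketch below shows empirical quantile mapping, one of the most common bias correction methods applied to regional climate projections. This is an illustrative minimal implementation, not the specific procedure critiqued in the abstract; the function names and the choice of 101 quantile knots are assumptions for the example.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: map each future model value to the
    observed value at the same quantile of the historical model CDF."""
    quantiles = np.linspace(0.0, 1.0, 101)
    mod_q = np.quantile(model_hist, quantiles)  # model historical quantiles
    obs_q = np.quantile(obs_hist, quantiles)    # observed quantiles
    # Piecewise-linear transfer function: model value -> corrected value
    return np.interp(model_fut, mod_q, obs_q)
```

Note that the transfer function is calibrated entirely on the historical period; this is precisely why a cross-validation confined to that period can look excellent while the correction still distorts the climate change signal outside it.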
Abstract. The Atmospheric River Tracking Method Intercomparison Project (ARTMIP) is an international collaborative effort to understand and quantify the uncertainties in atmospheric river (AR) science that arise from the choice of detection algorithm alone. Currently, there are many AR identification and tracking algorithms in the literature with a wide range of techniques and conclusions. ARTMIP strives to provide the community with information on different methodologies and provide guidance on the most appropriate algorithm for a given science question or region of interest. All ARTMIP participants will implement their detection algorithms on a specified common dataset for a defined period of time. The project is divided into two phases: Tier 1 will utilize the Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2) reanalysis from January 1980 to June 2017 and will be used as a baseline for all subsequent comparisons. Participation in Tier 1 is required. Tier 2 will be optional and include sensitivity studies designed around specific science questions, such as reanalysis uncertainty and climate change. High-resolution reanalysis and/or model output will be used wherever possible. Proposed metrics include AR frequency, duration, intensity, and precipitation attributable to ARs. Here, we present the ARTMIP experimental design, timeline, project requirements, and a brief description of the variety of methodologies in the current literature. We also present results from our 1-month “proof-of-concept” trial run designed to illustrate the utility and feasibility of the ARTMIP project.
Atmospheric rivers (ARs) are now widely known for their association with high‐impact weather events and long‐term water supply in many regions. Researchers within the scientific community have developed numerous methods to identify and track ARs—a necessary step for analyses on gridded data sets, and objective attribution of impacts to ARs. These different methods have been developed to answer specific research questions and hence use different criteria (e.g., geometry, threshold values of key variables, and time dependence). Furthermore, these methods are often employed using different reanalysis data sets, time periods, and regions of interest. The goal of the Atmospheric River Tracking Method Intercomparison Project (ARTMIP) is to understand and quantify uncertainties in AR science that arise due to differences in these methods. This paper presents results for key AR‐related metrics based on 20+ different AR identification and tracking methods applied to Modern‐Era Retrospective Analysis for Research and Applications Version 2 reanalysis data from January 1980 through June 2017. We show that AR frequency, duration, and seasonality exhibit a wide range of results, while the meridional distribution of these metrics along selected coastal (but not interior) transects is quite similar across methods. Furthermore, methods are grouped into criteria‐based clusters, within which the range of results is reduced. AR case studies and an evaluation of individual method deviation from an all‐method mean highlight advantages/disadvantages of certain approaches. For example, methods with less (more) restrictive criteria identify more (fewer) ARs and AR‐related impacts. Finally, this paper concludes with a discussion and recommendations for those conducting AR‐related research to consider.
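The simplest ingredient shared by many of the detection methods compared above is a threshold on integrated vapor transport (IVT). The sketch below computes IVT from a single vertical profile and applies a fixed 250 kg m⁻¹ s⁻¹ cutoff, a value commonly used in the literature; it is a minimal illustration, not any specific ARTMIP algorithm, and the function names are assumptions for the example.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s^-2)

def ivt_profile(q, u, v, p):
    """Integrated vapor transport (kg m^-1 s^-1) from a column profile of
    specific humidity q (kg/kg) and winds u, v (m/s) on pressure levels
    p (Pa), ordered surface to top. Trapezoidal rule in pressure."""
    dp = np.diff(p)  # negative going up; magnitude recovered by hypot below
    qu = np.sum(0.5 * (q[:-1] * u[:-1] + q[1:] * u[1:]) * dp)
    qv = np.sum(0.5 * (q[:-1] * v[:-1] + q[1:] * v[1:]) * dp)
    return np.hypot(qu, qv) / G

def ar_candidate(ivt_value, threshold=250.0):
    """Flag a grid cell as a potential AR cell by a fixed IVT threshold."""
    return ivt_value >= threshold
```

Real algorithms add geometry (length, width, aspect ratio), percentile-based or regionally varying thresholds, and time coherence on top of this, which is exactly where the methods diverge.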
California’s Sierra Nevada is a high-elevation mountain range with significant seasonal snow cover. Under anthropogenic climate change, amplification of the warming is expected to occur at elevations near snow margins due to snow albedo feedback. However, climate change projections for the Sierra Nevada made by global climate models (GCMs) and statistical downscaling methods miss this key process. Dynamical downscaling simulates the additional warming due to snow albedo feedback. Ideally, dynamical downscaling would be applied to a large ensemble of 30 or more GCMs to project ensemble-mean outcomes and intermodel spread, but this is far too computationally expensive. To approximate the results that would occur if the entire GCM ensemble were dynamically downscaled, a hybrid dynamical–statistical downscaling approach is used. First, dynamical downscaling is used to reconstruct the historical climate of the 1981–2000 period and then to project the future climate of the 2081–2100 period based on climate changes from five GCMs. Next, a statistical model is built to emulate the dynamically downscaled warming and snow cover changes for any GCM. This statistical model is used to produce warming and snow cover loss projections for all available CMIP5 GCMs. These projections incorporate snow albedo feedback, so they capture the local warming enhancement (up to 3°C) from snow cover loss that other statistical methods miss. Capturing these details may be important for accurately projecting impacts on surface hydrology, water resources, and ecosystems.
In this study (Part I), the mid-twenty-first-century surface air temperature increase in the entire CMIP5 ensemble is downscaled to very high resolution (2 km) over the Los Angeles region, using a new hybrid dynamical–statistical technique. This technique combines the ability of dynamical downscaling to capture finescale dynamics with the computational savings of a statistical model to downscale multiple GCMs. First, dynamical downscaling is applied to five GCMs. Guided by an understanding of the underlying local dynamics, a simple statistical model is built relating the GCM input and the dynamically downscaled output. This statistical model is used to approximate the warming patterns of the remaining GCMs, as if they had been dynamically downscaled. The full 32-member ensemble allows for robust estimates of the most likely warming and uncertainty resulting from intermodel differences. The warming averaged over the region has an ensemble mean of 2.3°C, with a 95% confidence interval ranging from 1.0° to 3.6°C. Inland and high elevation areas warm more than coastal areas year round, and by as much as 60% in the summer months. A comparison to other common statistical downscaling techniques shows that the hybrid method produces similar regional-mean warming outcomes but demonstrates considerable improvement in capturing the spatial details. Additionally, this hybrid technique incorporates an understanding of the physical mechanisms shaping the region’s warming patterns, enhancing the credibility of the final results.
Future snowfall and snowpack changes over the mountains of Southern California are projected using a new hybrid dynamical–statistical framework. Output from all general circulation models (GCMs) in phase 5 of the Coupled Model Intercomparison Project archive is downscaled to 2-km resolution over the region. Variables pertaining to snow are analyzed for the middle (2041–60) and end (2081–2100) of the twenty-first century under two representative concentration pathway (RCP) scenarios: RCP8.5 (business as usual) and RCP2.6 (mitigation). These four sets of projections are compared with a baseline reconstruction of climate from 1981 to 2000. For both future time slices and scenarios, ensemble-mean total winter snowfall loss is widespread. By the mid-twenty-first century under RCP8.5, ensemble-mean winter snowfall is about 70% of baseline, whereas the corresponding value for RCP2.6 is somewhat higher (about 80% of baseline). By the end of the century, however, the two scenarios diverge significantly. Under RCP8.5, snowfall sees a dramatic further decline; 2081–2100 totals are only about half of baseline totals. Under RCP2.6, only a negligible further reduction from midcentury snowfall totals is seen. Because of the spread in the GCM climate projections, these figures are all associated with large intermodel uncertainty. Snowpack on the ground, as represented by 1 April snow water equivalent, is also assessed. Because of enhanced snowmelt, the loss seen in snowpack is generally 50% greater than that seen in winter snowfall. By midcentury under RCP8.5, warming-accelerated spring snowmelt leads to snow-free dates that are about 1–3 weeks earlier than in the baseline period.
Using the hybrid downscaling technique developed in Part I of this study, temperature changes relative to a baseline period (1981–2000) in the greater Los Angeles region are downscaled for two future time slices: midcentury (2041–60) and end of century (2081–2100). Two representative concentration pathways (RCPs) are considered, corresponding to greenhouse gas emission reductions over coming decades (RCP2.6) and to continued twenty-first-century emissions increases (RCP8.5). All available global climate models from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are downscaled to provide likelihood and uncertainty estimates. By the end of century under RCP8.5, a distinctly new regional climate state emerges: average temperatures will almost certainly be outside the interannual variability range seen in the baseline. Except for the highest elevations and a narrow swath very near the coast, land locations will likely see 60–90 additional extremely hot days per year, effectively adding a new season of extreme heat. In mountainous areas, a majority of the many baseline days with freezing nighttime temperatures will most likely not occur. According to a similarity metric that measures daily temperature variability and the climate change signal, the RCP8.5 end-of-century climate will most likely be only about 50% similar to the baseline. For midcentury under RCP2.6 and RCP8.5 and end of century under RCP2.6, these same measures also indicate a detectable though less significant climatic shift. Therefore, while measures reducing global emissions would not prevent climate change at this regional scale in the coming decades, their impact would be dramatic by the end of the twenty-first century.
Corresponding author address: Fengpeng Sun, 7343 Math Sciences Building, 405 Hilgard Ave.,
High-resolution gridded datasets are in high demand because they are spatially complete and include important finescale details. Previous assessments have been limited to two to three gridded datasets or analyzed the datasets only at the station locations. Here, eight high-resolution gridded temperature datasets are assessed in two ways: at the stations, by comparing with Global Historical Climatology Network–Daily data; and away from the stations, using physical principles. This assessment includes six station-based datasets, one interpolated reanalysis, and one dynamically downscaled reanalysis. California is used as a test domain because of its complex terrain and coastlines, features known to differentiate gridded datasets. As expected, climatologies of station-based datasets agree closely with station data. However, away from stations, spread in climatologies can exceed 6°C. Some station-based datasets are very likely biased near the coast and in complex terrain, due to inaccurate lapse rates. Many station-based datasets have large unphysical trends (>1°C per decade) due to unhomogenized or missing station data—an issue that has been fixed in some datasets by using homogenization algorithms. Meanwhile, reanalysis-based gridded datasets have systematic biases relative to station data. Dynamically downscaled reanalysis has smaller biases than interpolated reanalysis, and has more realistic variability and trends. Dynamical downscaling also captures snow–albedo feedback, which station-based datasets miss. Overall, these results indicate that 1) gridded dataset choice can be a substantial source of uncertainty, and 2) some datasets are better suited for certain applications.
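One physical-principles check of the kind mentioned above is a lapse-rate diagnostic: regress a dataset's temperature climatology against terrain elevation and compare the slope with the roughly −6.5°C km⁻¹ environmental lapse rate. The sketch below is a minimal illustration of that diagnostic, not the assessment procedure of the paper; the function name and the simple linear fit are assumptions for the example.

```python
import numpy as np

def lapse_rate_estimate(temps, elevations):
    """Estimate the near-surface lapse rate (degC per km) by linearly
    regressing gridded temperature climatology (degC) on elevation (m)."""
    slope, intercept = np.polyfit(elevations / 1000.0, temps, 1)
    return slope, intercept
```

A dataset whose fitted slope is far from physically plausible values in complex terrain is a candidate for the interpolation-induced biases described above.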