Keywords: Climate change, downscaling, general circulation model, greenhouse gas emission scenarios, Northeast United States
ABSTRACT: The asynchronous regional regression model (ARRM) is a flexible and computationally efficient statistical model that can downscale station-based or gridded daily values of any variable that can be transformed into an approximately symmetric distribution and for which a large-scale predictor exists. This technique was developed to bridge the gap between large-scale outputs from atmosphere-ocean general circulation models (AOGCMs) and the fine-scale output required for local and regional climate impact assessments. ARRM uses piecewise regression to quantify the relationship between observed and modelled quantiles and then downscale future projections. Here, we evaluate the performance of three successive versions of the model in downscaling daily minimum and maximum temperature and precipitation for 20 stations in North America from diverse climate zones. Using cross-validation to maximize the independent comparison period, historical downscaled simulations are evaluated relative to observations in terms of three different quantities: the probability distributions, giving a visual image of the skill of each model; root-mean-square errors; and bias in nine quantiles that represent both means and extremes. Successive versions of the model show improved accuracy in simulating extremes, where AOGCMs are often most biased and which are frequently the focus of impact studies. Overall, the quantile regression-based technique is shown to be efficient, robust, and highly generalizable across multiple variables, regions, and climate model inputs.
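The quantile-based idea behind ARRM can be illustrated in a few lines. The sketch below is a simplified stand-in, not the published implementation: it reduces both series to quantiles (so observations and model output need not be paired in time, hence "asynchronous") and fits a single linear regression where ARRM fits a piecewise one; all variable and function names are illustrative.

```python
import numpy as np

def arrm_downscale(obs_hist, mod_hist, mod_fut, n_q=100):
    """Simplified sketch of asynchronous quantile regression downscaling.

    obs_hist and mod_hist need not be paired in time ("asynchronous"):
    both are reduced to quantiles before the regression is fit.
    """
    q = np.linspace(0.01, 0.99, n_q)
    obs_q = np.quantile(obs_hist, q)   # observed quantiles (predictand)
    mod_q = np.quantile(mod_hist, q)   # modelled quantiles (predictor)
    # A single linear fit stands in for ARRM's piecewise regression.
    slope, intercept = np.polyfit(mod_q, obs_q, 1)
    return slope * np.asarray(mod_fut) + intercept
```

ARRM's piecewise fit allows the slope to differ in the tails of the distribution, which is what a single global slope cannot capture and why the successive versions evaluated here improved the simulation of extremes.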
The ability of coupled atmosphere-ocean general circulation models (AOGCMs) to simulate variability in regional and global atmospheric dynamics is an important aspect of model evaluation. This is particularly true for recurring large-scale patterns known to be correlated with surface climate anomalies. Here, the authors evaluate the ability of all Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) historical Twentieth-Century Climate in Coupled Models (20C3M) AOGCM simulations for which the required output fields are available to simulate three patterns of large-scale atmospheric internal variability in the North Atlantic region: the Arctic Oscillation (AO), the North Atlantic Oscillation (NAO), and the Atlantic multidecadal oscillation (AMO); and three in the North Pacific region: the El Niño-Southern Oscillation (ENSO), the Pacific decadal oscillation (PDO), and the Pacific-North American Oscillation (PNA). These patterns are evaluated in two ways: first, in terms of their characteristic temporal variability and, second, in terms of their magnitude and spatial locations. It is found that historical total-forcing simulations from many of the AOGCMs produce seasonal spatial patterns that clearly resemble the teleconnection patterns resulting from identical calculation methods applied to reanalysis and/or observed fields such as the 40-yr ECMWF Re-Analysis, NCEP-NCAR, or Kaplan sea surface temperatures (SSTs), with the exception of the lowest-frequency pattern, the AMO, which is reproduced by only a few models. AOGCM simulations also show some significant biases in both the spatial and temporal characteristics of the six patterns. Many models tend to either under- or overestimate the strength of the spatial patterns and exhibit rotation about the polar region or east-west displacement.
Based on spectral analysis of the time series of each index, models also appear to vary in their ability to simulate the temporal variability of the teleconnection patterns, with some models producing oscillations that are too fast and others that are too slow relative to those observed. A few models produce a signal that is too periodic, most likely because of a failure to adequately simulate the natural chaotic behavior of the atmosphere. These results have implications for the selection and use of specific AOGCMs to simulate climate over the Northern Hemisphere, with some models being clearly more successful at (i.e., displaying less bias in) simulating large-scale, low-frequency patterns of temporal and spatial variability over the North Atlantic and Pacific regions relative to others.
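The spectral comparison described above, in which a model's index oscillates "too fast" or "too slow" relative to observations, can be sketched by estimating the dominant period of an index time series and comparing it with the same quantity computed from a reanalysis-based index. This is a crude FFT-based illustration, not the authors' analysis:

```python
import numpy as np

def dominant_period(index, dt=1.0):
    """Return the period (in units of dt) carrying the most spectral power.

    A model index whose dominant period is shorter than the observed
    one oscillates "too fast"; a sharply peaked spectrum suggests a
    signal that is too periodic relative to chaotic atmospheric behavior.
    """
    x = (index - index.mean()) / index.std()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = np.argmax(power[1:]) + 1   # skip the zero-frequency (mean) bin
    return 1.0 / freqs[k]
```

In practice the papers' style of analysis would compare full power spectra, not just the peak, but the peak comparison already distinguishes too-fast from too-slow variability.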
Empirical statistical downscaling (ESD) methods seek to refine global climate model (GCM) outputs via processes that glean information from a combination of observations and GCM simulations. They aim to create value-added climate projections by reducing biases and adding finer spatial detail. Analysis techniques, such as cross-validation, allow assessments of how well ESD methods meet these goals during observational periods. However, the extent to which an ESD method's skill might differ when applied to future climate projections cannot be assessed readily in the same manner. Here we present a "perfect model" experimental design that quantifies aspects of ESD method performance for both historical and late 21st century time periods. The experimental design tests a key stationarity assumption inherent to ESD methods, namely that ESD performance when applied to future projections is similar to that during the observational training period. Case study results employing a single ESD method (an Asynchronous Regional Regression Model variant) and climate variable (daily maximum temperature) demonstrate that violations of the stationarity assumption can vary geographically, seasonally, and with the amount of projected climate change. For the ESD method tested, the greatest challenges in downscaling daily maximum temperature projections are revealed to occur along coasts, in summer, and under conditions of larger amounts of projected climate change (Climatic Change, 2016).
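The perfect-model design can be sketched as follows: high-resolution model output plays the role of observed "truth", a coarsened version plays the role of the GCM predictor, and the same transfer function is trained on the historical period and scored against the withheld truth in both periods. In the sketch below a simple linear bias correction stands in for the ARRM variant, and all names are illustrative:

```python
import numpy as np

def stationarity_gap(truth_hist, coarse_hist, truth_fut, coarse_fut):
    """Perfect-model check: does downscaling error grow in the future period?

    Returns (historical RMSE, future RMSE). A future RMSE well above the
    historical one indicates a violation of the stationarity assumption.
    """
    # Train the stand-in downscaling function on the historical period only.
    slope, intercept = np.polyfit(coarse_hist, truth_hist, 1)
    rmse = lambda pred, obs: float(np.sqrt(np.mean((pred - obs) ** 2)))
    err_hist = rmse(slope * coarse_hist + intercept, truth_hist)
    err_fut = rmse(slope * coarse_fut + intercept, truth_fut)
    return err_hist, err_fut
```

The key point of the design is that "future observations" exist inside the model world, so the cross-validation logic of the historical period can be extended to the late 21st century.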
Thermal refugia underpin climate-smart management of coral reefs, but whether current thermal refugia will remain so under future warming is uncertain. We use statistical downscaling to provide the highest resolution thermal stress projections (0.01°/1 km, >230,000 reef pixels) currently available for coral reefs and identify future refugia on locally manageable scales. Here, we show that climate change will overwhelm current local-scale refugia, with declines in global thermal refugia from 84% of global coral reef pixels in the present-day climate to 0.2% at 1.5°C, and 0% at 2.0°C of global warming. Local-scale oceanographic features such as upwelling and strong ocean currents only rarely provide future thermal refugia. We confirm that warming of 1.5°C relative to pre-industrial levels will be catastrophic for coral reefs. Focusing management efforts on thermal refugia may only be effective in the short-term. Promoting adaptation to higher temperatures and facilitating migration will instead be needed to secure coral reef survival.
Assessments of future climate change impacts on ecosystems typically rely on multiple climate model projections, but often utilize only one downscaling approach trained on one set of observations. Here, we explore the extent to which modeled biogeochemical responses to changing climate are affected by the selection of the climate downscaling method and training observations used at the montane landscape of the Hubbard Brook Experimental Forest, New Hampshire, USA. We evaluated three downscaling methods: the delta method (or the change factor method), monthly quantile mapping (Bias Correction-Spatial Disaggregation, or BCSD), and daily quantile regression (Asynchronous Regional Regression Model, or ARRM). Additionally, we trained outputs from four atmosphere-ocean general circulation models (AOGCMs) (CCSM3, HadCM3, PCM, and GFDL-CM2.1) driven by higher (A1fi) and lower (B1) future emissions scenarios on two sets of observations (1/8° resolution grid vs. individual weather station) to generate the high-resolution climate input for the forest biogeochemical model PnET-BGC (eight ensembles of six runs). The choice of downscaling approach and spatial resolution of the observations used to train the downscaling model impacted modeled soil moisture and streamflow, which in turn affected forest growth, net N mineralization, net soil nitrification, and stream chemistry. All three downscaling methods were highly sensitive to the observations used, resulting in projections that were significantly different between station-based and grid-based observations. The choice of downscaling method also slightly affected the results, though not as much as the choice of observations. Using spatially smoothed gridded observations and/or methods that do not resolve sub-monthly shifts in the distribution of temperature and/or precipitation can produce biased results in model applications run at greater temporal and/or spatial resolutions.
These results underscore the importance of carefully considering the field observations used for training, as well as the downscaling method used to generate climate change projections, for smaller-scale modeling studies. Different sources of variability, including the selection of AOGCM, emissions scenario, downscaling technique, and data used for training the downscaling models, result in a wide range of projected forest ecosystem responses to future climate change.
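Of the three approaches compared, the delta (change factor) method is the simplest to state: each observed day is perturbed by the GCM-projected change in its month's mean. Below is a minimal additive sketch for temperature (precipitation typically uses multiplicative ratios instead); the function and argument names are illustrative, not from PnET-BGC or the cited study:

```python
import numpy as np

def delta_method(obs_daily, obs_months, gcm_hist_monthly, gcm_fut_monthly):
    """Minimal sketch of the delta (change factor) method for temperature.

    obs_daily:        observed daily values
    obs_months:       month (1-12) of each observed day
    gcm_*_monthly:    dicts mapping month -> GCM mean for that period
    """
    # Additive change factor per month: future minus historical mean.
    deltas = {m: gcm_fut_monthly[m] - gcm_hist_monthly[m]
              for m in gcm_hist_monthly}
    # Shift every observed day by its month's delta.
    return np.array([t + deltas[m] for t, m in zip(obs_daily, obs_months)])
```

Because every day in a month receives the same shift, this method preserves the observed daily variability exactly but cannot resolve sub-monthly shifts in the distribution, which is one reason the choice of method matters for models run at daily resolution.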
Excess nitrogen (N) is a primary driver of freshwater and coastal eutrophication globally, and urban stormwater is a rapidly growing source of N pollution. Stormwater best management practices (BMPs) are used widely to remove excess N from runoff in urban and suburban areas, and are expected to perform under a wide variety of environmental conditions. Yet the capacity of BMPs to retain excess N varies, and both the variation and its drivers are largely unknown, hindering the ability of water resource managers to meet water quality targets in a cost-effective way. Here, we use structured expert judgment (SEJ), a performance-weighted method of expert elicitation, to quantify the uncertainty in BMP performance under a range of site-specific environmental conditions and to estimate the extent to which key environmental factors influence variation in BMP performance. We hypothesized that rain event frequency and magnitude, BMP type and size, and physiographic province would significantly influence the experts' estimates of N retention by BMPs common to suburban Piedmont and Coastal Plain watersheds of the Chesapeake Bay region. Expert knowledge indicated wide uncertainty in BMP performance, with N removal efficiencies ranging from <0% (BMP acting as a source of N during a rain event) to >40%. Experts believed that the amount of rain was the primary identifiable source of variability in BMP efficiency, which is relevant given climate projections of more frequent heavy rain events in the mid-Atlantic. To assess the extent to which those projected changes might alter N export from suburban BMPs and watersheds, we combined downscaled estimates of rainfall with distributions of N loads for different-sized rain events derived from our elicitation. The model predicted higher and more variable N loads under a projected future climate regime, suggesting that current BMP regulations for reducing nutrients may be inadequate in the future.
We synthesize the interconnected impacts of Texas’ water and energy resources and infrastructure including the cascading effects due to Winter Storm Uri. The government’s preparedness, communication, policies, and response as well as storm impacts on vulnerable communities are evaluated using available information and data. Where knowledge gaps exist, we propose potential research to elucidate health, environmental, policy, and economic impacts of the extreme weather event. We expect that recommendations made here — while specific to the situation and outcomes of Winter Storm Uri — will increase Texas’ resilience to other extreme weather events not discussed in this paper. We found that out of 14 million residents who were on boil water notices, those who were served by very small water systems went, on average, a minimum of three days longer without potable water. Available county-level data do not indicate vulnerable communities went longer periods of time without power or water during the event. More resolved data are required to understand who was most heavily impacted at the community or neighborhood level. Gaps in government communication, response, and policy are discussed, including issues with identifying — and securing power to — critical infrastructure and the fact that the state’s Emergency Alert System was not used consistently to update Texans during the crisis. Finally, research recommendations are made to address weaknesses identified during and after the storm, including (1) developing reliable communication strategies, (2) reducing disproportionate impacts on vulnerable communities, (3) assessing human health impacts, (4) increasing water infrastructure resilience, and (5) understanding how climate change could affect infrastructure resilience in the future.