Floods, wildfires, heatwaves and droughts often result from a combination of interacting physical processes across multiple spatial and temporal scales. The combination of processes (climate drivers and hazards) leading to a significant impact is referred to as a 'compound event'. Traditional risk assessment methods typically only consider one driver and/or hazard at a time, potentially leading to underestimation of risk, as the processes that cause extreme events often interact and are spatially and/or temporally dependent. Here we show how a better understanding of compound events may improve projections of potential high-impact events, and can provide a bridge between climate scientists, engineers, social scientists, impact modellers and decision-makers, who need to work closely together to understand these complex events.
Climate and weather variables such as rainfall, temperature, and pressure are indicators for hazards such as tropical cyclones, floods, and fires. The impact of these events can be due to a single variable being in an extreme state, but more often it is the result of a combination of variables not all of which are necessarily extreme. Here, the combination of variables or events that lead to an extreme impact is referred to as a compound event. Any given compound event will depend upon the nature and number of physical variables, the range of spatial and temporal scales, the strength of dependence between processes, and the perspective of the stakeholder who defines the impact. Modeling compound events is a large, complex, and interdisciplinary undertaking. To facilitate this task we propose the use of influence diagrams for defining, mapping, analyzing, modeling, and communicating the risk of the compound event. Ultimately, a greater appreciation of compound events will lead to further insight and a changed perspective on how impact risks are associated with climate-related hazards. WIREs Clim Change 2014, 5:113-128. doi: 10.1002/wcc.252
This paper presents a strategy for diagnosing and interpreting hydrological nonstationarity, aiming to improve hydrological models and their predictive ability under changing hydroclimatic conditions. The strategy consists of four elements: (i) detecting potential systematic errors in the calibration data; (ii) hypothesizing a set of 'nonstationary' parameterizations of existing hydrological model structures, where one or more parameters vary in time as functions of selected covariates; (iii) trialing alternative stationary model structures to assess whether parameter nonstationarity can be reduced by modifying the model structure; and (iv) selecting one or more models for prediction. The Scott Creek catchment in South Australia and the lumped hydrological model GR4J are used to illustrate the strategy. Streamflow predictions improve significantly when the GR4J parameter describing the maximum capacity of the production store is allowed to vary in time as a combined function of: (i) an annual sinusoid; (ii) the previous 365 day rainfall and potential evapotranspiration; and (iii) a linear trend. This improvement provides strong evidence of model nonstationarity. Based on a range of hydrologically oriented diagnostics such as flow-duration curves, the GR4J model structure was modified by introducing an additional calibration parameter that controls recession behavior and by making actual evapotranspiration dependent only on catchment storage. Model comparison using an information-theoretic measure (the Akaike Information Criterion) and several hydrologically oriented diagnostics shows that the GR4J modifications clearly improve predictive performance in Scott Creek catchment.
Based on a comparison of 22 versions of GR4J with different representations of nonstationarity and other modifications, the model selection approach applied in the exploratory period (used for parameter estimation) correctly identifies models that perform well in a much drier independent confirmatory period.
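The two ingredients of this approach can be sketched in a few lines: a model parameter that varies in time as a function of an annual sinusoid, antecedent 365-day rainfall, and a linear trend, and the Akaike Information Criterion used to compare candidate models. This is a minimal illustration, not the authors' exact equations; the exponential link and the coefficients a, b, c, phi are hypothetical choices made here to keep the capacity positive.

```python
import numpy as np

def time_varying_x1(doy, rain_365, t, x1_ref=350.0,
                    a=0.1, phi=0.0, b=0.0005, c=-1e-5):
    """Illustrative time-varying production-store capacity.

    Combines (i) an annual sinusoid, (ii) antecedent 365-day rainfall,
    and (iii) a linear trend, mirroring the covariates described above.
    The exponential link and coefficient values are hypothetical.
    """
    log_mult = (a * np.sin(2 * np.pi * doy / 365.25 + phi)
                + b * (rain_365 - np.mean(rain_365))
                + c * t)
    return x1_ref * np.exp(log_mult)  # exp link keeps capacity positive

def aic(n, sse, k):
    """Akaike Information Criterion for a least-squares fit:
    n * ln(SSE / n) + 2k, so extra parameters (larger k) are
    only rewarded if they reduce the error sum of squares enough."""
    return n * np.log(sse / n) + 2 * k
```

Comparing `aic` across the candidate model versions then trades goodness of fit in the exploratory period against parameter count, which is how overfitted nonstationary parameterizations can be screened out before the confirmatory period.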
Anthropogenic climate change is expected to affect global river flow. Here, we analyze time series of low, mean, and high river flows from 7250 observatories around the world covering the years 1971 to 2010. We identify spatially complex trend patterns, where some regions are drying and others are wetting consistently across low, mean, and high flows. Trends computed from state-of-the-art model simulations are consistent with the observations only if radiative forcing that accounts for anthropogenic climate change is considered. Simulated effects of water and land management do not suffice to reproduce the observed trend pattern. Thus, the analysis provides clear evidence for the role of externally forced climate change as a causal driver of recent trends in mean and extreme river flow at the global scale.
Accounting for dependence between extreme rainfall and storm surge can be critical for correctly estimating coastal flood risk. Several statistical methods are available for modeling such extremal dependence, but the comparative performance of these methods for quantifying the exceedance probability of rare coastal floods is unknown. This paper compares three classes of statistical methods (threshold-excess, point process, and conditional) in terms of their ability to quantify flood risk. The threshold-excess method offers approximately unbiased estimates for dependence parameters, but its application for quantifying flood risk is limited because it is unable to handle situations where only one of the two variables is extreme. In contrast, the point process method (with the logistic and negative logistic models) and the conditional method describe the full distribution of extremes, but they overestimate and underestimate the dependence strength, respectively. We conclude that the point process method is the most suitable approach for modeling dependence between extreme rainfall and storm surge when the dependence is relatively strong, while none of the three methods produces satisfactory results for bivariate extremes with very weak dependence. It is therefore important to take the bias of each method into account when applying these methods to flood estimation problems. A case study is used to demonstrate the three statistical methods and illustrate the implications of dependence for flood risk.
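The logistic model mentioned above has a simple closed form that makes the effect of dependence on joint exceedance probabilities easy to see. The sketch below assumes unit Frechet margins, where the bivariate logistic distribution function is exp(-V(x, y)) with V(x, y) = (x^(-1/alpha) + y^(-1/alpha))^alpha; alpha = 1 gives independence and alpha near 0 gives complete dependence. This is a textbook illustration of the model family, not the paper's fitted analysis.

```python
import math

def logistic_V(x, y, alpha):
    """Exponent measure of the bivariate logistic model with unit
    Frechet margins; alpha in (0, 1]."""
    return (x ** (-1.0 / alpha) + y ** (-1.0 / alpha)) ** alpha

def joint_cdf(x, y, alpha):
    """P(X <= x, Y <= y) = exp(-V(x, y))."""
    return math.exp(-logistic_V(x, y, alpha))

def joint_exceedance(u, alpha):
    """P(X > u, Y > u) by inclusion-exclusion, with the unit
    Frechet marginal CDF F(u) = exp(-1/u)."""
    F = math.exp(-1.0 / u)
    return 1.0 - 2.0 * F + joint_cdf(u, u, alpha)
```

For a fixed threshold `u`, decreasing `alpha` (stronger dependence) inflates the joint exceedance probability relative to the independent case, which is exactly why ignoring rainfall-surge dependence can understate compound coastal flood risk.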
This study investigates global changes in indicators of mean and extreme streamflow. The assessment is based on the Global Streamflow Indices and Metadata archive and focuses on time series of the annual minimum, the 10th, 50th, and 90th percentiles, the annual mean, and the annual maximum of daily streamflow. Trends are estimated using the Sen-Theil slope, and the significance of mean regional trends is established through bootstrapping. Changes in the indices are often regionally consistent, showing that the entire flow distribution is moving either upward or downward. In addition, the analysis confirms the complex nature of hydrological change where drying in some regions (e.g., in the Mediterranean) is contrasted by wetting in other regions (e.g., North Asia). Observed changes are discussed in the context of previous results and with respect to model estimates of the impacts of anthropogenic climate change and human water management.
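The Sen-Theil slope used above is simply the median of the slopes between all pairs of observations, which makes it far more robust to outliers than least-squares regression. A minimal sketch (the O(n^2) pairwise form; production code would typically use an existing implementation such as `scipy.stats.theilslopes`):

```python
import numpy as np

def theil_sen_slope(t, y):
    """Sen-Theil slope estimator: the median of the slopes
    (y[j] - y[i]) / (t[j] - t[i]) over all pairs i < j.
    Robust to outliers, unlike the least-squares slope."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    slopes = []
    n = len(t)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if t[j] != t[i]:  # skip ties in time to avoid division by zero
                slopes.append((y[j] - y[i]) / (t[j] - t[i]))
    return np.median(slopes)
```

Because a single corrupted annual value shifts only a minority of the pairwise slopes, the median is essentially unchanged, which is why this estimator is standard for hydrological trend studies.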
This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM applications. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle. Several obstacles have hindered the development of such a global streamflow archive. Firstly, there are threats to the quantity of data, such as political sensitivities (Nelson, 2009), cost recovery and strict access policies (Hannah et al., 2011), unavailability in an electronic format, consistency of data formats, limited documentation, missing metadata, and a lack of resources for database maintenance and updating. Secondly, there are difficulties associated with the quality of the data in many regions, such as poor spatial coverage, poor quality control, variable quality control between regions, inconsistent metadata, imprecise geographic coordinates of the site, changes in the density of stream gauges, and variable record lengths.
Lastly, even in locations where there are abundant and high-quality streamflow observations, there can be questions over their utility in specific research such as climate sensitivity analysis due to the manifestation of human impacts, for example, urbanization, land-use changes, channelization, and upstream dams (Hannah et al., 2011). Published by Copernicus Publications.
Abstract. This is Part 2 of a two-paper series presenting the Global Streamflow Indices and Metadata Archive (GSIM), which is a collection of daily streamflow observations at more than 30 000 stations around the world. While Part 1 (Do et al., 2018a) describes the data collection process as well as the generation of auxiliary catchment data (e.g. catchment boundary, land cover, mean climate), Part 2 introduces a set of quality-controlled time-series indices representing (i) the water balance, (ii) the seasonal cycle, (iii) low flows and (iv) floods. To this end we first consider the quality of individual daily records using a combination of quality flags from data providers and automated screening methods. Subsequently, streamflow time-series indices are computed at yearly, seasonal and monthly resolution. The paper provides a generalized assessment of the homogeneity of all generated streamflow time-series indices, which can be used to select time series that are suitable for a specific task. The newly generated global set of streamflow time-series indices is made freely available with a digital object identifier at https://doi.pangaea.de/10.1594/PANGAEA.887470 and is expected to foster global freshwater research by acting as a ground truth for model validation or as a basis for assessing the role of human impacts on the terrestrial water cycle. It is hoped that a renewed interest in streamflow data at the global scale will foster efforts in the systematic assessment of data quality and provide momentum to overcome administrative barriers that lead to inconsistencies in global collections of relevant hydrological observations.
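Annual indices of the kind described in these abstracts (annual minimum, 10th/50th/90th percentiles, mean, maximum of daily flow) are straightforward to derive from a daily series. The sketch below shows the general idea with pandas; it illustrates the kinds of indices described above, not GSIM's exact algorithms or quality-control rules.

```python
import numpy as np
import pandas as pd

def annual_indices(daily_flow):
    """Illustrative annual streamflow indices from a daily series.

    daily_flow: pandas Series of daily discharge indexed by date.
    Returns one row per calendar year with the annual minimum,
    the 10th, 50th and 90th percentiles, the mean, and the maximum.
    """
    grouped = daily_flow.groupby(daily_flow.index.year)
    return pd.DataFrame({
        "min": grouped.min(),
        "p10": grouped.quantile(0.10),
        "p50": grouped.quantile(0.50),
        "p90": grouped.quantile(0.90),
        "mean": grouped.mean(),
        "max": grouped.max(),
    })
```

In practice, years with substantial gaps or flagged values would be masked out before computing the indices, which is the role of the quality-control step described in the abstract.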