Purpose
The purpose of this paper is to discuss how, although increasing data availability from a wide range of sources unlocks unprecedented opportunities for disaster risk reduction, data interoperability remains a challenge due to a number of barriers. Because a first step towards enhancing data interoperability for disaster risk reduction is to identify the major barriers, this paper presents a case study on data interoperability in disaster risk reduction in Europe, linking current barriers to the regional initiative of the European Science and Technology Advisory Group.
Design/methodology/approach
In support of Priority 2 (“Strengthening disaster risk governance to manage disaster risk”) of the Sendai Framework and SDG17 (“Partnerships for the goals”), this paper presents a case study on barriers to data interoperability in Europe based on a series of reviews, surveys and interviews with National Sendai Focal Points and stakeholders in science and research, governmental agencies, non-governmental organizations and industry.
Findings
For a number of European countries, there remains a clear imbalance between long-term disaster risk reduction on the one hand and short-term preparation, together with the dominant role of emergency relief, response and recovery, on the other. This points to the potential of investing in ex ante measures with better inclusion and exploitation of data.
Originality/value
Modern society is facing a digital revolution. As highlighted by the International Council of Science and the Committee on Data for Science and Technology, digital technology offers profound opportunities for science to discover unsuspected patterns and relationships in nature and society, on scales from the molecular to the cosmic, from local health systems to global sustainability. It has created the potential for disciplines of science to synergize into a holistic understanding of the complex challenges currently confronting humanity; the Sustainable Development Goals are a direct reflection of this. Interdisciplinarity is achieved through the integration of data across relevant disciplines. However, a barrier to realizing and exploiting this potential arises from the incompatible data standards and nomenclatures used in different disciplines. Although the problem has been addressed by several initiatives, the following challenge still remains: to make online data integration routine.
<p>Climate change is expected to alter the occurrence of floods in high latitude countries; evidence of earlier spring floods and more frequent rainfall-driven floods has already been detected in Norway. While the state-of-the-art hydrological climate-impact model chain embeds explicit assumptions about stationarity, machine learning offers a complementary approach to hydrological climate-impact modelling by facilitating direct downscaling from large-scale atmospheric variables to streamflow, thus making downscaling and bias-correction implicit. While applications of machine learning algorithms for streamflow and flood modelling are well documented in the scientific literature, few studies have linked large-scale atmospheric variables directly to streamflow without including observed streamflow as part of the input variable selection. Such autoregressive models have limited application for climate-impact studies, as future streamflow is yet to be observed. Furthermore, most studies linking large-scale atmospheric forcing to catchment response have focused on monthly, seasonal, or annual streamflow. This study presents the application of feed-forward and recurrent neural networks for daily streamflow and flood reconstruction from atmospheric reanalysis data with spatiotemporal resolution comparable to global climate model outputs. Two widely applied neural network types, namely the multilayer perceptron (MLP) and long short-term memory (LSTM), were benchmarked against gradient boost regression tree models. Catchment-specific, physically based input variable selections representing the dominant flood-drivers were identified for 27 catchments in Norway. The selected catchments have low degrees of basin development and anthropogenic influence, so that the established statistical links only reflect the forcing-response relationship between the atmosphere and the catchments. 
Overall, the LSTM obtained the highest accuracy, with a median Nash-Sutcliffe Efficiency (NSE) of 0.88 on the training set (1950-2000) and 0.76 on the testing set (2006-2010). However, the MLP proved more robust, with a smaller drop in NSE from training (0.76) to testing (0.72), indicating that further restricting the input variables based on hydrological theory and physical interpretability may increase the robustness of neural networks in the context of daily streamflow modelling. The median NSE of the regression tree models was lower on both the training set (0.73) and the testing set (0.66). The results point to the potential of neural networks for hydrological climate-impact modelling in catchments where both snowmelt and rainfall constitute flood-drivers in the present climate. This research provides a springboard for future studies employing neural networks for hydrological climate-impact modelling in high latitude countries. Future research should assess the potential for regionalization by including catchment characteristics through clustering techniques like Kohonen Self-Organizing Maps.</p>
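The skill score used to benchmark the MLP, LSTM, and regression tree models above is the Nash-Sutcliffe Efficiency. As a minimal illustrative sketch (the function name and the synthetic streamflow values below are our own, not taken from the study), it can be computed as:

```python
def nash_sutcliffe_efficiency(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    NSE = 1 indicates a perfect fit; NSE = 0 means the model is no
    better than predicting the mean of the observations; negative
    values mean it is worse than the mean.
    """
    mean_obs = sum(observed) / len(observed)
    numerator = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    denominator = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - numerator / denominator

# Synthetic daily streamflow example (m^3/s); values are illustrative only.
obs = [10.0, 12.0, 15.0, 30.0, 22.0, 14.0]
sim = [11.0, 11.5, 16.0, 27.0, 23.0, 13.0]
print(round(nash_sutcliffe_efficiency(obs, sim), 3))  # prints 0.953
```

Because the denominator measures variance around the observed mean, NSE tends to reward models that capture flood peaks; this is one reason it is a common choice for daily streamflow evaluation.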