Abstract: One of the grand challenges of disaster management is for stakeholders to be able to discover, access, integrate, and analyze task-appropriate disaster data together with their associated algorithms and workflows. Even with a growing number of initiatives to publish disaster data using open principles, integration and reuse are still difficult due to existing interoperability barriers within datasets. Several frameworks for assessing data interoperability exist but do not generate best practice solutions to ex…
“…In particular, big data potential can only be achieved if legal, organizational, semantic, and technical interoperability is reached [143]. In particular, some researchers report [144] that while technical interoperability has reached a high level of maturity, semantic and legal interoperability remains a significant barrier for the sector. Future work should be carried out to address semantic interoperability, taking into account existing standards, such as OASIS Emergency Data Exchange Language (EDXL) Emergency Standards [145], and semantic interoperability based on ontologies [146,147] to exploit the potential of disaster knowledge graphs [148].…”
Nowadays, we are witnessing a shift in the way emergencies are managed. On the one hand, the availability of big data and the evolution of geographical information systems make it possible to manage and process large quantities of information that can greatly improve decision-making. On the other hand, digital humanitarianism has been shown to be very beneficial for providing support during emergencies. Despite this, the full potential of combining automatic big data processing with digital humanitarianism approaches has not yet been realized, though there is an initial body of research. This paper aims to provide a reference architecture for emergency management that instantiates the NIST Big Data Reference Architecture, offering a common language and enabling the comparison of solutions to similar problems.
“…Hazard/disaster-related data that is already available is frequently geographically dispersed and stored by a variety of organisations, making it challenging to acquire and use for disaster management objectives. [22].…”
Globally, the rise in disasters causes billions of dollars in losses each year, including the loss of property and life, and has a negative impact on countries' socioeconomic conditions. Geospatial datasets are becoming crucial for situational awareness and the management of disasters. Timely and accurate information on disastrous occurrences must be collected, maintained, and managed for efficient emergency management. These geospatial datasets come from different data provider agencies; thus, there is a need to focus on geospatial data sharing that would benefit the authorities in decision making. This initiative entails high commitment and collaboration from the data provider agencies, which can be achieved through a geospatial data sharing approach. This study aims to identify the critical success factors of geospatial data sharing in the context of natural disasters. A preliminary review, a focus group discussion, and interviews were conducted to gain insight into the subject being studied. The findings revealed thirteen (13) critical success factors for geospatial data sharing in disaster management. Technology, Organisation, Social, Environment, Ecology, and Economy are the dimensions identified and mapped to the thirteen critical success factors.
Access to integrated disaster-related data through querying is still a problem due to associated semantic barriers. The disaster domain largely relies on the top-down approach to ontology development. This limits reuse due to associated commitments and complex alignments within ontologies. Therefore, there is a need to utilize a bottom-up approach that reuses patterns for representing disaster knowledge. To bridge the availability gap of patterns for representing disaster knowledge, this study identifies existing and emerging patterns for reuse while organizing disaster data from multiple sector stakeholders. Based on the eXtreme Design (XD) methodology and key informant interviews, competency questions (CQs) were elicited from domain stakeholders. The CQs are matched with existing patterns from other contexts. Emerging patterns (e.g., the Event Classification and Quality Dependence Description for Objects patterns) are also developed for CQs not captured, and subsequently tested using SPARQL queries characterising the CQs. It is in this context that this paper presents a characterisation of disaster risk knowledge using CQs and corresponding patterns (reusable and emerging) covering that knowledge. Accordingly, we illustrate a pattern-driven use case to organise drought hazard data for early warning purposes, demonstrating the value of adopting a pattern-based approach to knowledge representation in the disaster domain.
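As a sketch of how a CQ can be characterised as a SPARQL query over an event-classification pattern, consider the question "Which drought events affected a given region after a given date?". The prefix `ecp:` and all class and property names below are illustrative assumptions for this example, not the vocabulary actually defined by the study:

```sparql
# Illustrative only: the ecp: vocabulary is a hypothetical
# event-classification pattern, not the study's published ontology.
PREFIX ecp: <http://example.org/event-classification#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

SELECT ?event ?region ?start
WHERE {
  ?event a ecp:Event ;
         ecp:classifiedAs  ecp:Drought ;   # pattern-based event typing
         ecp:affectsRegion ?region ;
         ecp:hasStartDate  ?start .
  FILTER (?start >= "2020-01-01"^^xsd:date)
}
```

A query of this shape makes the CQ testable: if the query returns the expected bindings against sample data organised with the pattern, the pattern is considered to cover that competency question.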