A two-component, fully automated flood monitoring system is described and evaluated. It results from combining two flood services currently under development at the Center for Satellite Based Crisis Information (ZKI) of the German Aerospace Center (DLR) to rapidly support disaster management activities. A first-phase monitoring component systematically detects potential flood events on a continental scale using daily-acquired, medium-spatial-resolution optical data from the Moderate Resolution Imaging Spectroradiometer (MODIS). A set of thresholds controls the activation of the second-phase crisis component, which derives flood information at higher spatial detail using a Synthetic Aperture Radar (SAR) satellite mission (TerraSAR-X). The proposed activation procedure supports the identification of flood situations at different spatial resolutions and the time-critical, on-demand programming of SAR satellite acquisitions at an early stage of an evolving flood situation. The automated processing chains of the MODIS Flood Service (MFS) and the TerraSAR-X Flood Service (TFS) include data pre-processing, the computation and adaptation of global auxiliary data, thematic classification, and the subsequent dissemination of flood maps via an interactive web client. The system is operationally demonstrated and evaluated by monitoring two recent flood events: Russia (2013) and Albania/Montenegro (2013).
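To make the two-phase activation concept concrete, the following is a minimal sketch of a threshold-based trigger. The abstract does not specify the actual threshold set used by the MFS/TFS; the criteria below (flooded area, flooded fraction, persistence) and their values are illustrative assumptions only.

```python
# Hypothetical sketch of the two-phase activation logic: daily MODIS-derived
# flood observations are checked against a threshold set, and exceedance
# triggers the high-resolution SAR (TerraSAR-X) crisis component.

from dataclasses import dataclass

@dataclass
class MonitoringObservation:
    """One daily MODIS-derived flood observation for a monitoring unit."""
    flooded_area_km2: float   # area classified as open water beyond reference
    flooded_fraction: float   # flooded share of the monitoring unit
    consecutive_days: int     # days the anomaly has persisted

# Illustrative thresholds; an operational service would calibrate these.
MIN_FLOODED_AREA_KM2 = 25.0
MIN_FLOODED_FRACTION = 0.05
MIN_PERSISTENCE_DAYS = 2

def should_activate_crisis_component(obs: MonitoringObservation) -> bool:
    """Decide whether the first-phase (MODIS) monitoring result justifies
    tasking a high-resolution SAR acquisition (second-phase component)."""
    return (
        obs.flooded_area_km2 >= MIN_FLOODED_AREA_KM2
        and obs.flooded_fraction >= MIN_FLOODED_FRACTION
        and obs.consecutive_days >= MIN_PERSISTENCE_DAYS
    )

if __name__ == "__main__":
    obs = MonitoringObservation(flooded_area_km2=40.0,
                                flooded_fraction=0.08,
                                consecutive_days=3)
    if should_activate_crisis_component(obs):
        print("Thresholds exceeded: request TerraSAR-X acquisition")
```

The design choice worth noting is the persistence criterion: requiring the anomaly to last more than one day guards against wasting time-critical SAR tasking on transient misclassifications (e.g., cloud shadow) in the daily optical data.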
Place name extraction refers to the task of detecting precise location information in texts such as microblogs. It is a vital task for assisting disaster response, revealing where damage has occurred, where people need assistance, and where help can be found. All current approaches to extracting place names from microblogs face crucial problems: rule-based methods do not generalize, gazetteer-based methods do not detect unknown multi-word place names, and machine learning methods lack sufficient data, which is costly to annotate at scale. We propose a hybrid method named GazPNE that avoids these problems by fusing rules, gazetteers, and deep learning to achieve state-of-the-art performance without requiring any manually annotated data. Specifically, we utilize C-LSTM, a fusion of Convolutional and Long Short-Term Memory Neural Networks, to decide whether an n-gram in a microblog text is a place name. The C-LSTM is trained on 4.6 million positive examples extracted from OpenStreetMap and GeoNames and 220 million negative examples synthesized by rules, and it is evaluated on 4,500 disaster-related tweets containing 9,026 place names from three floods: Louisiana (US, 2016), Houston (US, 2016), and Chennai (India, 2015). Our method improves on the previous state of the art by 6%, achieving an F1 score of 0.86.
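As an illustration of the classification architecture named above, here is a minimal PyTorch sketch of a C-LSTM n-gram classifier: a 1-D convolution over word embeddings feeds an LSTM whose final hidden state is mapped to a place/not-place decision. All hyperparameters, layer sizes, and the vocabulary size are assumptions for demonstration, not the configuration reported for GazPNE.

```python
# Minimal C-LSTM sketch in the spirit of GazPNE: convolution extracts local
# features from word embeddings, an LSTM models the resulting sequence, and
# a linear layer produces binary logits (place vs. not-place).

import torch
import torch.nn as nn

class CLSTMClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 100,
                 num_filters: int = 64, kernel_size: int = 3,
                 hidden_dim: int = 64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Convolution over embeddings captures local n-gram patterns.
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size, padding=1)
        # LSTM models the sequence of convolutional feature vectors.
        self.lstm = nn.LSTM(num_filters, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)  # place vs. not-place logits

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embedding(token_ids)     # (batch, seq, embed_dim)
        x = x.transpose(1, 2)             # (batch, embed_dim, seq)
        x = torch.relu(self.conv(x))      # (batch, filters, seq)
        x = x.transpose(1, 2)             # (batch, seq, filters)
        _, (h_n, _) = self.lstm(x)        # final hidden state
        return self.fc(h_n[-1])           # (batch, 2)

model = CLSTMClassifier(vocab_size=50_000)
dummy = torch.randint(1, 50_000, (8, 5))  # batch of 8 five-token n-grams
logits = model(dummy)                     # shape: (8, 2)
```

In such a setup, candidate n-grams extracted from a tweet would be tokenized into integer IDs and passed through the model; the class with the larger logit decides whether the n-gram is treated as a place name.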
Messages on social media can be an important source of information during crisis situations, be they short-term disasters or longer-term events such as COVID-19. They can frequently provide details about developments much faster than traditional sources (e.g., official news) and can offer personal perspectives on events, such as opinions or specific needs. In the future, these messages may also serve to assess disaster risks. One challenge in utilizing social media during crises is the reliable detection of informative messages in a flood of data. Researchers have begun to address this problem in recent years, starting with crowd-sourced methods. Lately, approaches have shifted towards automatic analysis of messages. In this review article, we present methods for the automatic detection of crisis-related messages (tweets) on Twitter. We start by showing the varying definitions of importance and relevance relating to disasters, as they can serve very different purposes. This is followed by an overview of existing crisis-related social media data sets for evaluation and training purposes. We then compare approaches to the detection problem based on (1) filtering by characteristics such as keywords and location, (2) crowdsourcing, and (3) machine learning techniques, with regard to their focus, data requirements, technical prerequisites, efficiency, accuracy, and time scales. These factors determine the suitability of the approaches for different expectations, but also their limitations. We identify which aspects each of them can contribute to the detection of informative tweets, and which areas can be improved in the future. We point out particular challenges, such as the linguistic issues concerning this kind of data. Finally, we suggest future avenues of research and show connections to related tasks, such as the subsequent semantic classification of tweets.
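To illustrate the first family of approaches compared in the review (filtering by characteristics), the following is a minimal sketch of keyword- and location-based tweet filtering. The keyword list, bounding box, and tweet dictionary layout are hypothetical and chosen only for demonstration.

```python
# Illustrative keyword + bounding-box filter for crisis-related tweets.
# This is a sketch of the simplest detection approach, not a method
# proposed by the review itself.

from typing import Iterable, Optional

CRISIS_KEYWORDS = {"flood", "flooding", "evacuate", "rescue", "damage"}
# Hypothetical area of interest as (min_lon, min_lat, max_lon, max_lat).
AREA_OF_INTEREST = (-95.8, 29.5, -95.0, 30.1)

def is_candidate(text: str, lon: Optional[float], lat: Optional[float]) -> bool:
    """Keep a tweet if it mentions a crisis keyword and, when geotagged,
    falls inside the area of interest."""
    tokens = {t.strip(".,!?#").lower() for t in text.split()}
    if not tokens & CRISIS_KEYWORDS:
        return False
    if lon is not None and lat is not None:
        min_lon, min_lat, max_lon, max_lat = AREA_OF_INTEREST
        return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat
    return True  # keep non-geotagged keyword matches for later triage

def filter_stream(tweets: Iterable[dict]) -> list:
    return [t for t in tweets
            if is_candidate(t["text"], t.get("lon"), t.get("lat"))]

sample = [
    {"text": "Flooding on Main St, need rescue!", "lon": -95.4, "lat": 29.8},
    {"text": "Great weather today"},
]
print(filter_stream(sample))  # only the first tweet passes
```

As the review notes, such filters are cheap and fast but brittle: they miss relevant tweets that use unanticipated vocabulary or lack geotags, which is why crowdsourcing and machine learning approaches are compared as alternatives.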