Forecasting flood inundation in urban areas is challenging due to the lack of validation data. Recent developments have led to new classes of data sources, such as images and videos from smartphones and CCTV cameras. If the reference dimensions of objects in an image, such as bridges or buildings, are known, the image can be used to estimate water levels with computer vision algorithms. Such algorithms employ deep learning and edge detection to identify the water surface in an image, which can then serve as additional validation data for forecasting inundation. In this study, a methodology is presented for flood inundation forecasting that integrates validation data generated with the assistance of computer vision. Six equifinal models are run simultaneously, and the one with the best goodness-of-fit (least error) against the validation data is selected for forecasting. Images are collected and processed offline, either on a regular basis or following a flood event. The results show that the accuracy of inundation forecasting can be improved significantly using the additional validation data.
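The model-selection step described above, choosing among equifinal models by least error against the vision-derived water levels, might be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of RMSE as the error measure, and the example numbers are all assumptions.

```python
import numpy as np

def select_best_model(simulated_levels, observed_levels):
    """Pick the equifinal model whose simulated water levels best match
    the computer-vision-derived validation data (lowest RMSE).

    Hypothetical helper: RMSE is an assumed error measure; the paper
    only specifies 'least error' as the goodness-of-fit criterion.
    """
    errors = [np.sqrt(np.mean((np.asarray(sim) - observed_levels) ** 2))
              for sim in simulated_levels]
    return int(np.argmin(errors)), errors

# Illustrative data: observed levels from processed images, and six
# equifinal model runs that differ by a constant bias (invented values).
observed = np.array([1.2, 1.5, 1.9, 2.4])
sims = [observed + bias for bias in (0.4, -0.3, 0.05, 0.6, -0.5, 0.2)]
best, errs = select_best_model(sims, observed)
# The model with the smallest bias (0.05 m) wins the selection.
```

In a live system, the selection would be re-run whenever a new batch of images is processed, so the forecasting model tracks the one currently best supported by observations.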
In recent times, frequent natural disasters have caused widespread disruption to life and property. While such disasters may be impossible to prevent, emerging technologies can help minimize their impact. This study proposes a deep learning-based computer vision and crowdsourcing methodology for detecting floods, among the most disruptive of disasters, and estimating their depth. State-of-the-art flood detection systems rely on satellite or radar images; this research instead processes images captured ad hoc in flood-ravaged zones by smartphones or digital cameras. Crowdsourced images of flood scenes afford better coverage and more diverse perspectives for assessing flood devastation. This paper proposes a fuzzy logic-based algorithm, together with color-based image segmentation, to estimate the extent of flooding from crowdsourced images. These methods classify flooded areas into high, medium, or low levels of flooding, facilitating cost-effective, time-critical rescue operations. The algorithm yielded an accuracy of 83.1% on our dataset.
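The color-segmentation and classification pipeline might be sketched as below. This is an illustrative simplification under stated assumptions: the HSV thresholds for muddy floodwater are invented, and the crisp class boundaries stand in for the paper's fuzzy logic rules, which are not reproduced here.

```python
import numpy as np

def flood_fraction(hsv_image, hue_range=(0.05, 0.2), sat_max=0.5):
    """Fraction of pixels whose color resembles muddy floodwater.

    hsv_image: H x W x 3 array with channels in [0, 1].
    hue_range / sat_max are illustrative thresholds, not the paper's.
    """
    h, s = hsv_image[..., 0], hsv_image[..., 1]
    mask = (h >= hue_range[0]) & (h <= hue_range[1]) & (s <= sat_max)
    return float(mask.mean())

def classify_flood(fraction, low=0.2, high=0.5):
    """Map the flooded-pixel fraction to the three classes used in the
    paper. Crisp cutoffs here replace the fuzzy membership functions."""
    if fraction < low:
        return "low"
    if fraction < high:
        return "medium"
    return "high"

# Synthetic example: top 3 rows colored like floodwater, rest not.
img = np.zeros((10, 10, 3))
img[:3, :, 0], img[:3, :, 1] = 0.10, 0.30   # flood-like hue/saturation
img[3:, :, 0], img[3:, :, 1] = 0.60, 0.80   # non-flood pixels
level = classify_flood(flood_fraction(img))  # 30% flooded -> "medium"
```

A deployed version would run on each crowdsourced image after geolocation, so rescue teams can rank areas by the resulting class.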