Flooding is among the natural disasters that pose the greatest threat to human life and property, especially in densely populated urban areas. Rapid and precise extraction of flooded areas is key to supporting emergency-response planning and providing damage assessment in both spatial and temporal measurements. Unmanned Aerial Vehicle (UAV) technology has recently been recognized as an efficient photogrammetric data acquisition platform that can quickly deliver high-resolution imagery because of its cost-effectiveness, ability to fly at lower altitudes, and ability to enter hazardous areas. Different image classification methods, including Support Vector Machines (SVMs), have been used for flood extent mapping. In recent years, there has been significant improvement in remote sensing image classification using Convolutional Neural Networks (CNNs). CNNs have demonstrated excellent performance on various tasks, including image classification, feature extraction, and segmentation. CNNs can learn features automatically from large datasets through the organization of multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches to extract flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used in this research. The model was fine-tuned, and k-fold cross-validation was applied to estimate its performance on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on datasets containing only one hundred training samples and still produce highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method. The image segmentation results obtained from FCN-16s were compared with those obtained from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs could extract flooded areas from UAV images more precisely than traditional classifiers such as SVMs. The classification accuracy achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class was 97.52%, 97.8%, 94.20%, and 89%, respectively.
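The cross-validated evaluation described above can be sketched as follows. This is a minimal, illustrative sketch in PyTorch, assuming a VGG16 backbone rearranged into an FCN-16s-style head, two classes (water vs. non-water), and pre-tiled image/mask tensors; the layer indices, training settings, and water-class metric are assumptions, not the authors' exact implementation.

```python
# Hedged sketch: VGG16-backed FCN-16s-style model with k-fold cross-validation
# and a water-class accuracy taken from a confusion matrix. Layer indices,
# epochs, learning rate, and tensor shapes are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
from torchvision.models import vgg16

class FCN16s(nn.Module):
    """Minimal FCN-16s-style head on a pretrained VGG16 backbone."""
    def __init__(self, num_classes=2):
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1").features
        self.to_pool4 = feats[:24]   # conv blocks through pool4 (stride 16)
        self.to_pool5 = feats[24:]   # remaining block through pool5 (stride 32)
        self.score5 = nn.Conv2d(512, num_classes, kernel_size=1)
        self.score4 = nn.Conv2d(512, num_classes, kernel_size=1)
        self.up2 = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up16 = nn.ConvTranspose2d(num_classes, num_classes, 32, stride=16, padding=8)

    def forward(self, x):
        p4 = self.to_pool4(x)
        p5 = self.to_pool5(p4)
        fused = self.up2(self.score5(p5)) + self.score4(p4)   # FCN-16s skip fusion
        return self.up16(fused)                               # back to input resolution

def kfold_water_accuracy(images, masks, k=5, epochs=10, lr=1e-4, device="cpu"):
    """images: (N, 3, H, W) float tensor; masks: (N, H, W) long tensor, 1 = water."""
    per_fold = []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(images):
        model, loss_fn = FCN16s(num_classes=2).to(device), nn.CrossEntropyLoss()
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for i in train_idx:
                opt.zero_grad()
                loss_fn(model(images[i:i + 1].to(device)), masks[i:i + 1].to(device)).backward()
                opt.step()
        model.eval()
        with torch.no_grad():
            preds = torch.cat([model(images[i:i + 1].to(device)).argmax(1).cpu() for i in val_idx])
        cm = confusion_matrix(masks[val_idx].flatten().numpy(), preds.flatten().numpy(), labels=[0, 1])
        per_fold.append(cm[1, 1] / cm[1].sum())   # producer's accuracy for the water class
    return float(np.mean(per_fold))
```

The per-fold figure reported here is the producer's accuracy (recall) for the water class; the other entries of the same confusion matrix yield the remaining accuracy measures.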
Among the different types of natural disasters, floods are the most devastating, widespread, and frequent, accounting for approximately 30% of the total losses caused by natural disasters. Accurate flood-risk mapping is critical in reducing such damages: coupled with rain and stage gage data, it supports correctly predicting the extent of a flood, planning emergency response, developing land use plans and regulations for the construction of structures and infrastructure, and providing damage assessment in both spatial and temporal measurements. The reliability and accuracy of such flood assessment maps depend on the quality of the digital elevation model (DEM) under flood conditions. This study investigates the quality of an Unmanned Aerial Vehicle (UAV)-based DEM for spatial flood assessment mapping and evaluates the extent of a flood event in Princeville, North Carolina, during Hurricane Matthew. The challenges and problems of on-demand DEM production during a flooding event are discussed. An accuracy analysis was performed by comparing the water surface extracted from the UAV-derived DEM with the water surface/stage obtained from the nearby US Geological Survey (USGS) stream gauge station and LiDAR data.
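As a rough illustration of the comparison step, the sketch below delineates an inundation extent from a DEM given a single water-surface elevation (for example, a stage read from the nearby gauge) and measures its agreement with an extent delineated from a second DEM. The file names, the planar water-surface assumption, and the stage value are hypothetical; rasterio is assumed for raster I/O.

```python
# Hedged sketch: flood extent from a DEM given a gauge-derived water-surface
# elevation, compared against an extent from a second (e.g., LiDAR) DEM.
# File names and the stage value are hypothetical; rasterio is assumed for I/O.
import numpy as np
import rasterio

def inundation_mask(dem, water_surface_elev, nodata=None):
    """Mark cells whose ground elevation lies below the assumed water surface."""
    wet = dem < water_surface_elev
    if nodata is not None:
        wet &= dem != nodata
    return wet

def extent_agreement(pred, ref):
    """Simple overlap statistics between two boolean flood masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return {"iou": float(inter) / float(union),
            "pred_flooded_fraction": float(pred.mean()),
            "ref_flooded_fraction": float(ref.mean())}

with rasterio.open("uav_dem.tif") as src:      # hypothetical UAV-derived DEM
    uav_dem = src.read(1)
with rasterio.open("lidar_dem.tif") as src:    # hypothetical pre-flood LiDAR DEM
    lidar_dem = src.read(1)

stage_elevation = 10.7                          # assumed gauge water-surface elevation (m)
print(extent_agreement(inundation_mask(uav_dem, stage_elevation),
                       inundation_mask(lidar_dem, stage_elevation)))
```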
Flooding occurs frequently and causes loss of life and extensive damage to infrastructure and the environment. Accurate and timely mapping of flood extent to ascertain damages is critical and essential for relief activities. Recently, deep-learning-based approaches, including convolutional neural networks (CNNs), have shown promising results for flood extent mapping. However, these methods cannot extract floods underneath the vegetation canopy using optical imagery. This article attempts to address this problem by introducing an integrated CNN and region growing (RG) method for mapping both visible and underneath-vegetation flooded areas. The CNN-based classifier is used to extract flooded areas from the optical images, whereas the RG method is applied to estimate the extent of floods underneath vegetation, which are not visible in the imagery, using the digital elevation model. A data augmentation technique is applied when training the CNN-based classifier to improve the classification results. The results show that data augmentation can enhance the accuracy of image classification and that the proposed integrated method efficiently detects floods both in the visible areas and in the areas covered by vegetation, which is essential to supporting effective flood emergency response and recovery activities.
Index Terms: Convolutional neural network (CNN), flood mapping, LiDAR, region growing (RG), remote sensing.
I. INTRODUCTION
Flooding is one of the catastrophic and frequently occurring natural disasters that cause extensive damage to life, infrastructure, and the environment. In many countries, the severity and frequency of flooding have increased in recent years due to extreme weather such as hurricanes and the expansion of urbanization. Generating accurate and timely inundation maps is essential for regional and federal agencies to manage rescue operations and assess damages effectively [1]-[2].
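A minimal sketch of the region-growing idea is shown below: starting from CNN-detected flood pixels as seeds, the algorithm grows into neighboring cells whose DEM elevation does not exceed an estimated water level, so flooded ground hidden under vegetation can be included. The water-level estimate, tolerance, and 4-connectivity are illustrative assumptions rather than the article's exact formulation.

```python
# Hedged sketch of region growing from CNN-detected flood seeds over a DEM:
# neighboring cells are added while their elevation stays at or below an
# estimated water level. Tolerance and 4-connectivity are assumptions.
from collections import deque
import numpy as np

def grow_flood(dem, seed_mask, tolerance=0.1):
    """dem: 2-D elevation array; seed_mask: boolean mask of CNN-detected flood pixels."""
    rows, cols = dem.shape
    flooded = seed_mask.copy()
    # Water-surface estimate: highest elevation among seed pixels plus a tolerance.
    water_level = dem[seed_mask].max() + tolerance
    queue = deque(zip(*np.nonzero(seed_mask)))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):      # 4-connected neighbors
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols
                    and not flooded[rr, cc] and dem[rr, cc] <= water_level):
                flooded[rr, cc] = True
                queue.append((rr, cc))
    return flooded
```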
Abstract. This research examines the ability of deep learning methods to classify remote sensing imagery for agricultural applications. U-Net and fully convolutional network (FCN) models are fine-tuned, utilized, and tested for crop/weed classification. The dataset for this study includes 60 top-down images of an organic carrot field, collected by an autonomous vehicle and labeled by experts. The FCN-8s model achieved 75.1% accuracy in detecting weeds, compared to 66.72% for U-Net using 60 training images. However, the U-Net model performed better at detecting crops, achieving 60.48% compared to 47.86% for FCN-8s.
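The class-wise figures quoted above correspond to per-class accuracy (recall) computed from predicted and reference label masks; a minimal sketch, assuming integer label maps with 0 = background, 1 = crop, and 2 = weed, is given below.

```python
# Hedged sketch: per-class accuracy (recall) for background/crop/weed masks.
# The 0/1/2 label encoding is an assumed convention, not the dataset's own.
import numpy as np

def per_class_accuracy(pred, truth, num_classes=3):
    """pred, truth: integer label arrays of identical shape."""
    acc = {}
    for c in range(num_classes):
        in_class = truth == c
        acc[c] = float((pred[in_class] == c).mean()) if in_class.any() else float("nan")
    return acc   # e.g., {0: background, 1: crop, 2: weed}
```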
Weeds are among the significant factors that can harm crop yield by invading crops and smothering pastures, and they significantly decrease the quality of the harvested crop. Herbicides are widely used in agriculture to control weeds; however, excessive use of herbicides can lead to environmental pollution as well as yield reduction. Accurate mapping of crops and weeds is essential to determine the location of weeds and treat those areas locally. Increasing demand for flexible, accurate, and lower-cost precision agriculture technology has resulted in advancements in UAS-based remote sensing data collection and methods. Deep learning methods have been successfully employed for UAS data processing and mapping tasks in different domains. This research investigates, compares, and evaluates the performance of deep learning methods for crop/weed discrimination on two open-source, published benchmark datasets captured by different UASs (a field robot and a UAV) and labeled by experts. We specifically investigate the following architectures: 1) U-Net, 2) SegNet, 3) FCN (FCN-32s, FCN-16s, and FCN-8s), and 4) DeepLab v3+. The deep learning models were fine-tuned to classify the UAS datasets into three classes (background, crops, and weeds). The classification accuracy achieved by U-Net is 77.9%, higher than the 62.6% of SegNet, 68.4% of FCN-32s, and 77.2% of FCN-16s, and slightly lower than the 81.1% of FCN-8s and 84.3% of DeepLab v3+. Experimental results showed that the ResNet-18-based segmentation model, DeepLab v3+, could extract weeds more precisely than the other classifiers.
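A brief sketch of how a ResNet-18-backed DeepLab v3+ could be fine-tuned for the three classes is given below, using the segmentation_models_pytorch package; the package choice, optimizer, and learning rate are assumptions, not the authors' implementation.

```python
# Hedged sketch: ResNet-18-backed DeepLab v3+ fine-tuned for three classes
# (background, crop, weed) using segmentation_models_pytorch; the package,
# optimizer, and learning rate are assumptions, not the authors' setup.
import torch
import segmentation_models_pytorch as smp

model = smp.DeepLabV3Plus(
    encoder_name="resnet18",      # ResNet-18 encoder, as referenced above
    encoder_weights="imagenet",   # start from ImageNet features and fine-tune
    in_channels=3,
    classes=3,                    # background, crops, weeds
)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """images: (B, 3, H, W) float tensor; masks: (B, H, W) long tensor of class ids."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```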