“…Deep learning is widely used in environmental remote sensing, such as land use extraction, land cover change analysis [5,6], remote sensing image classification [7,8], and object detection [9][10][11]. The deep-learning models commonly used in road extraction are convolutional neural networks (CNNs) [12], whose network structure is often used for various computer-vision tasks, and semantic segmentation technology [13][14][15][16][17][18] is another area of great research interest in image interpretation.…”
Automatic road extraction from very-high-resolution remote sensing images has become a popular topic in a wide range of fields. Convolutional neural networks are often used for this purpose. However, many network models do not achieve satisfactory extraction results because of the elongated nature and varying sizes of roads in images. To improve the accuracy of road extraction, this paper proposes a deep-learning model based on the structure of Deeplab v3. It incorporates a squeeze-and-excitation (SE) module to weight different feature channels, and performs multi-scale upsampling to preserve and fuse shallow and deep information. To address the imbalance between road and background samples in images, different loss functions and backbone network modules are tested during model training. Compared with cross entropy, dice loss improves the performance of the model during both training and prediction. The SE module is superior to ResNext and ResNet in improving the integrity of the extracted roads. Experimental results on the Massachusetts Roads Dataset show that the proposed model (Nested SE-Deeplab) improves F1-score by 2.4% and Intersection over Union by 2.0% compared with FC-DenseNet. The proposed model also achieves better segmentation accuracy in road extraction than other mainstream deep-learning models, including Deeplab v3, SegNet, and UNet.
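The advantage of dice loss over cross entropy on unbalanced road masks can be illustrated with a minimal NumPy sketch (a toy illustration, not the authors' implementation; the 10x10 mask with 5% road pixels is an assumed example):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|). It normalises by the
    foreground size, so a rare road class still dominates the loss."""
    inter = np.sum(pred * target)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps))

def cross_entropy(pred, target, eps=1e-7):
    """Pixel-wise binary cross entropy, averaged over all pixels, so the
    abundant background class dilutes the signal when roads are rare."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

# Toy 10x10 mask where roads cover only 5% of the pixels.
target = np.zeros((10, 10))
target[5, 2:7] = 1.0
pred = 0.9 * target  # confident prediction on the road pixels only

print(round(dice_loss(pred, target), 3))     # Dice still penalises the 10% gap
print(round(cross_entropy(pred, target), 3)) # CE is diluted by 95% background
```

With 95% background pixels predicted perfectly, the averaged cross entropy is near zero even though the road prediction is imperfect, while dice loss keeps the training signal focused on the road class.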
“…In practical applications, usually each remote sensing image block has a great similarity with its neighboring image blocks. Inspired by this, we added local constraints on the basis of the sparse constraints of formula (9). The introduction of local constraints emphasizes that local constraints are more important than sparse constraints, which is consistent with the conclusion of locality constraint linear coding (LLC) [41].…”
Section: A Spatial Spectrum Feature Learning and Time-varying Featur…
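The locality constraint described in the excerpt can be sketched along the lines of LLC's analytical solution (a hedged sketch under assumed toy dimensions and regularisation values, not the cited paper's code):

```python
import numpy as np

def llc_code(x, B, lam=1e-4, sigma=1.0):
    """LLC-style coding of one feature x over codebook B (K atoms as rows).
    The locality adaptor d penalises atoms far from x, so similar
    neighbouring image blocks end up reusing nearby codebook atoms."""
    dist = np.linalg.norm(B - x, axis=1)        # distance of x to each atom
    d = np.exp((dist - dist.max()) / sigma)     # locality weights in (0, 1]
    Z = B - x                                   # codebook shifted to x
    C = Z @ Z.T                                 # local covariance (K x K)
    c = np.linalg.solve(C + lam * np.diag(d), np.ones(len(B)))
    return c / c.sum()                          # shift-invariance: sum(c) = 1

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 16))                # toy codebook: 8 atoms in R^16
x = B[2] + 0.05 * rng.standard_normal(16)       # an image block close to atom 2
c = llc_code(x, B)
print(int(np.argmax(np.abs(c))))                # the nearby atom dominates the code
```

Because nearby atoms receive a smaller locality penalty, the code concentrates its weight on them, which matches the excerpt's point that neighbouring image blocks with similar content obtain similar codes.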
“…However, because of variation in target appearance and interference from complex backgrounds and noise, target detection in high-spatial-resolution remote sensing images is usually difficult. Many studies have been conducted to detect targets in remote sensing images [8][9][10]. All of these works focus on two issues: which features best characterize the target, and how to select candidate regions efficiently.…”
Current research on multi-information fusion of remote sensing images mainly targets images from specific satellite sensors and cannot be extended to other data sources. For high-resolution remote sensing images whose surface coverage changes significantly, most mainstream algorithms struggle to produce satisfactory reconstructions. The algorithm proposed in this paper combines sparse representation with the spectral, spatial, and temporal features of remote sensing images for the first time to address these problems. It first simulates the human visual mechanism, obtaining the spatial, spectral, and temporal features of a remote sensing image through spatial-spectral dictionary learning and a time-varying weight learning model. Second, local constraints are added to the extraction of temporal features to obtain temporal and geographical change information from heterogeneous remote sensing images. Then, a sparse representation model combining spatial, spectral, and temporal features is proposed to extract features from high-resolution remote sensing images. Finally, based on the VGG-16 network, a deep fully convolutional target recognition network is proposed, which takes the extracted feature maps as input to recognize targets in remote sensing images. Experimental results show that the proposed method improves the accuracy of target recognition.
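As one concrete instance of the sparse representation step, the l1-regularised coding problem min ||x - Da||^2 + lam*||a||_1 can be solved with a few lines of ISTA (iterative soft-thresholding); the dictionary shape and lam below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.05, n_iter=300):
    """Sparse coding of x over dictionary D (atoms as unit-norm columns)
    via iterative soft-thresholding (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the data-fit term
        z = a - g / L                      # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)             # normalise atoms to unit length
truth = np.zeros(32)
truth[3], truth[10] = 1.5, -2.0            # a 2-sparse ground-truth code
x = D @ truth                              # synthesised "feature" signal
a = ista_sparse_code(x, D)
print(sorted(np.nonzero(np.abs(a) > 1.0)[0].tolist()))  # large coefficients
```

The recovered code concentrates on the atoms that actually generated the signal, which is the property the paper's space-spectrum-time model relies on when representing an image patch with a few dictionary atoms.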
“…FireCast is different from past wildfire spread prediction research because of the incorporation of deep supervised machine learning methods in a unique model structure. Recently, Convolutional Neural Networks (CNN) have been used with remotely sensed data for different tasks, and show powerful capabilities [Ding et al, 2018;Maggiori et al, 2017;Zhang et al, 2016]. Therefore, we implement the 2D CNN detailed in Table 1, which is used for supervised learning from the various visual inputs explained further in §4, and is implemented using Keras with the TensorFlow backend.…”
Destructive wildfires cause billions of dollars in damage each year and are expected to increase in frequency, duration, and severity due to climate change. Current state-of-the-art wildfire spread models rely on mathematical growth predictions and physics-based models, which are difficult and computationally expensive to run. We present and evaluate a novel system, FireCast, which combines artificial intelligence (AI) techniques with data-collection strategies from geographic information systems (GIS). FireCast predicts which areas surrounding a burning wildfire are at high risk of near-future fire spread, based on historical fire data and using modest computational resources. FireCast is compared with a random prediction model and a commonly used wildfire spread model, Farsite, outperforming both in total accuracy, recall, and F-score.
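The core operation of a 2D CNN such as the one FireCast applies to gridded GIS rasters can be sketched in plain NumPy (an illustrative toy, not the FireCast architecture of Table 1, which the paper implements in Keras with the TensorFlow backend):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as CNN layers use)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation, applied element-wise."""
    return np.maximum(x, 0.0)

# Toy 6x6 "elevation" raster with a sharp vertical break in the middle.
raster = np.tile([0.0, 0.0, 0.0, 1.0, 1.0, 1.0], (6, 1))
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)   # responds to left-to-right rises
feature_map = relu(conv2d(raster, kernel))
print(feature_map.shape)                    # valid convolution shrinks the grid
```

Stacking such filtered-and-rectified feature maps is what lets a CNN pick out local spatial patterns, such as terrain breaks or fuel boundaries, from raster inputs before the final spread-risk prediction.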