Abstract. Semantic segmentation of high-spatial-resolution remote sensing images has applications across a wide range of problems in this field. In recent years, advanced techniques based on fully convolutional neural networks have achieved impressive accuracies. However, these methods estimate the labels of different classes independently, so the resulting segmentation is generally too coarse to capture the relationships between pixels. Moreover, because of the use of convolution filters and computational limitations, the receptive field of these filters remains limited in deep layers. In this study, a method based on a generative adversarial network (GAN) is proposed to strengthen spatial consistency in the output segmentation map. The segmentation model receives assistance from the GAN in the form of a higher-order potential loss. Furthermore, for better stability and performance during training, the Wasserstein GAN formulation is used to optimize the model. We demonstrate an increase in semantic segmentation accuracy on the challenging ISPRS Vaihingen benchmark dataset.
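The abstract does not give implementation details, but the Wasserstein GAN objective it refers to has a standard form. A minimal sketch, assuming scalar critic scores on real and generated (segmentation-map) samples; function names are illustrative, not from the paper:

```python
import numpy as np

def wgan_critic_loss(scores_real, scores_fake):
    # Wasserstein critic objective: maximize E[D(real)] - E[D(fake)],
    # written here as a loss to be minimized.
    return np.mean(scores_fake) - np.mean(scores_real)

def wgan_generator_loss(scores_fake):
    # The generator (here, the segmentation network) tries to raise
    # the critic's score on its predicted segmentation maps.
    return -np.mean(scores_fake)
```

In practice this loss would be added, with a weighting factor, to the per-pixel segmentation loss, and the critic would be kept (approximately) 1-Lipschitz, e.g. via weight clipping or a gradient penalty.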
Abstract. This research evaluates the ability of thermal images obtained from aerial platforms to produce 3D point clouds. In this study, the thermal camera is first calibrated. Then, to avoid data redundancy, the key frames of the acquired thermal video are separated from the other frames. Afterwards, the point clouds are generated, and a thermal ortho image is created from the key frames. The evaluation uses a visible orthophoto, ground control points, and the straightness of building edges extracted from the thermal images. The results of this study show that the thermal ortho image matches the visible ortho image with good accuracy over the study area. Moreover, the standard deviation of building edges was computed for a number of reconstructed buildings in the thermal ortho image, showing acceptable dispersion: 77% of the measurements taken along building edges fit a straight line with an accuracy better than two pixels, and about half of these values are extracted with an accuracy better than one pixel.
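The edge-straightness measure described above can be sketched as a least-squares line fit to the extracted edge pixels, reporting the standard deviation of the perpendicular residuals in pixels. This is an assumed formulation; the paper's exact procedure is not given in the abstract:

```python
import numpy as np

def edge_straightness_std(points):
    # Fit a total-least-squares line to edge pixels (x, y) via SVD and
    # return the standard deviation of perpendicular residuals, in pixels.
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Right singular vectors: first is the line direction,
    # last is the normal to the fitted line.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal
    return residuals.std()
```

A perfectly straight edge yields a standard deviation near zero; values below one or two pixels correspond to the accuracy thresholds quoted in the abstract.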
Abstract. Building change detection in high-resolution remote sensing images is one of the most important and widely applied topics in urban management and urban planning. Differing illumination conditions and registration errors are the main error sources in bitemporal images and cause pseudo-changes in the results. On the other hand, deep learning techniques, especially convolutional neural networks (CNNs), have been successful and widely considered, but they usually lose shape and detail at object edges. Accordingly, we propose a W-shaped ResUnet++ network in which images acquired under different environmental conditions enter the network independently. ResUnet++ is a network with residual blocks, triple attention blocks, and Atrous Spatial Pyramid Pooling; it is used on both sides of the network to extract deeper and more discriminative features. This improves channel and spatial inter-dependencies while reducing the computational cost. The Euclidean distance between the extracted features is then computed, and deconvolution is applied. In addition, a dual loss function is designed: the first part uses weighted binary cross-entropy to address the imbalance between changed and unchanged samples in the change detection training data, and the second part applies mask-boundary consistency constraints, adding to the loss a term that forces the predicted edges to converge to the edges of the training data. We implemented the proposed method on two remote sensing datasets and compared the results with state-of-the-art methods. The F1 score improved by 1.52% and 4.22% with the proposed model on the first and second datasets, respectively.
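The weighted binary cross-entropy component of the dual loss has a standard form, even though the abstract does not state the exact weighting scheme. A minimal sketch, assuming per-class scalar weights (the weight values and function name are illustrative):

```python
import numpy as np

def weighted_bce(y_true, y_pred, w_pos, w_neg, eps=1e-7):
    # Weighted binary cross-entropy: up-weights the rare "changed"
    # class (w_pos) relative to the dominant "unchanged" class (w_neg)
    # to counter the class imbalance in change detection data.
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    loss = -(w_pos * y_true * np.log(y_pred)
             + w_neg * (1 - y_true) * np.log(1 - y_pred))
    return loss.mean()
```

In the paper's setting, this term would be summed with the mask-boundary consistency term to form the dual loss used for training.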