This paper presents an effective water extraction method using SAR imagery, developed for flood mapping in unpredictable flood situations. The proposed method combines amplitude thresholding, terrain information, and object-based classification for noise removal. Because water areas have the lowest amplitude values in SAR images, thresholding on SAR amplitude can effectively extract water bodies. However, amplitude alone cannot distinguish water from the occluded areas caused by steep relief, so these areas are eliminated with terrain information. Even after amplitude thresholding and terrain masking, noise that interfered with users' interpretation of the water maps remained; object-based classification with an object-size criterion, determined by a histogram-based technique, was therefore applied to remove it. When only SAR amplitude information was used, the overall accuracy was 83.67%. With SAR amplitude, terrain information, and the noise removal technique, the overall classification accuracy over the study area reached 96.42%; in particular, user accuracy improved by 46.00%.
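The three-step pipeline described above (amplitude threshold, terrain masking, object-size filtering) can be sketched in a few lines. The snippet below is a minimal illustration rather than the authors' implementation: Otsu's method stands in for the amplitude threshold, a slope cut-off stands in for the terrain masking, and a fixed minimum object size stands in for the histogram-derived size criterion; `amplitude` and `slope` are assumed to be co-registered 2-D arrays.

```python
import numpy as np
from skimage.filters import threshold_otsu            # assumption: Otsu stands in for the amplitude threshold
from skimage.morphology import remove_small_objects   # stands in for the object-size noise filter


def extract_water(amplitude, slope, slope_max_deg=15.0, min_object_px=50):
    """Water mask from SAR amplitude, terrain masking, and object-size filtering.

    amplitude     : 2-D array of calibrated SAR amplitude values
    slope         : 2-D array of terrain slope in degrees (same shape, co-registered)
    slope_max_deg : pixels steeper than this are treated as relief-induced
                    shadow/layover rather than water (illustrative value)
    min_object_px : minimum connected-component size kept as water
                    (in the paper this criterion is histogram-derived)
    """
    # 1. Amplitude thresholding: water returns the lowest amplitude values.
    threshold = threshold_otsu(amplitude)
    water = amplitude < threshold

    # 2. Terrain masking: drop dark pixels caused by steep relief (radar shadow/occlusion).
    water &= slope < slope_max_deg

    # 3. Object-based noise removal: discard components smaller than the size criterion.
    water = remove_small_objects(water, min_size=min_object_px)
    return water
```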
Aerial images are an outstanding option for observing terrain because of their high-resolution (HR) capability, but their high operational cost makes periodic observation of a region of interest difficult. Satellite imagery is an alternative, but its low resolution is an obstacle. In this study, we proposed a context-based approach that super-resolves 10 m Sentinel-2 imagery into 2.5 and 5.0 m prediction images using an aerial orthoimage acquired over the same period. The proposed model was compared with the enhanced deep super-resolution network (EDSR), which performs excellently among existing super-resolution (SR) deep learning algorithms, using the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root-mean-squared error (RMSE). Our context-based ResU-Net outperformed EDSR in all three metrics. Including the 60 m resolution bands of Sentinel-2 imagery further improved performance through fine-tuning: when the 60 m images were included, RMSE decreased while PSNR and SSIM increased. The results also showed that the denser the neural network, the higher the output quality, and accuracy was highest when both denser feature dimensions and the 60 m images were used.
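The three evaluation metrics named above are standard and easy to reproduce. The sketch below computes them with scikit-image and NumPy, assuming the reference and prediction patches are co-registered float arrays normalised to [0, 1]; the function name `sr_quality` and the normalisation are illustrative assumptions, not part of the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def sr_quality(reference, prediction, data_range=1.0):
    """PSNR, SSIM and RMSE between a reference image and an SR prediction.

    reference, prediction : 2-D (or 3-D multiband) float arrays on the same grid,
                            e.g. a resampled aerial orthoimage patch and the network output
    data_range            : dynamic range of the imagery (assumed normalised to [0, 1])
    """
    rmse = float(np.sqrt(np.mean((reference - prediction) ** 2)))
    psnr = peak_signal_noise_ratio(reference, prediction, data_range=data_range)
    ssim = structural_similarity(reference, prediction, data_range=data_range,
                                 channel_axis=-1 if reference.ndim == 3 else None)
    return {"RMSE": rmse, "PSNR": psnr, "SSIM": ssim}
```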
A large amount of information needs to be identified and produced when carrying out projects of interest. Thermal infrared (TIR) images are extensively used because they provide information that cannot be extracted from visible images. In particular, TIR oblique images facilitate the acquisition of building-facade information that is challenging to obtain from a nadir image. Combining a TIR oblique image with the 3D information acquired from conventional visible nadir imagery therefore creates great synergy for identifying surface information; however, matching common points across such images is an onerous task. In this study, a robust method is proposed for matching image pairs that combine different wavelengths and geometries (i.e., visible nadir-looking vs. TIR oblique, and visible oblique vs. TIR nadir-looking). Three main processes, phase congruency, histogram matching, and Image Matching by Affine Simulation (IMAS), were adjusted to accommodate the radiometric and geometric differences of the matched image pairs. The method was applied to Unmanned Aerial Vehicle (UAV) images of building and non-building areas, and the results were compared with frequently used matching techniques such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), synthetic aperture radar SIFT (SAR-SIFT), and Affine SIFT (ASIFT). The proposed method outperformed these techniques in root mean square error (RMSE) and matching performance (matched vs. not matched), and is believed to be a reliable solution for pinpointing surface information through image matching between TIR and visible sensors with different geometries.
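For context, the kind of baseline comparison mentioned above can be illustrated with one of the cited techniques. The sketch below runs plain SIFT with Lowe's ratio test via OpenCV and reports an affine-model RMSE over the matched points; it is only a stand-in for the baselines, not the proposed phase-congruency + IMAS pipeline, and the ratio and RANSAC settings are illustrative.

```python
import cv2
import numpy as np


def match_sift(img_a, img_b, ratio=0.75):
    """Baseline SIFT matching between two single-band 8-bit images (Lowe's ratio test)."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    return pts_a, pts_b


def match_rmse(pts_a, pts_b):
    """RMSE of matched points after fitting a RANSAC affine model (illustrative check)."""
    model, _inliers = cv2.estimateAffine2D(pts_a, pts_b, method=cv2.RANSAC)
    projected = cv2.transform(pts_a.reshape(-1, 1, 2), model).reshape(-1, 2)
    return float(np.sqrt(np.mean(np.sum((projected - pts_b) ** 2, axis=1))))
```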