Hyperspectral Image (HSI) classification methods based on Deep Learning (DL) have proven effective in recent years. In particular, Convolutional Neural Networks (CNNs) have demonstrated strong performance on such tasks. However, the scarcity of training samples remains one of the main causes of low classification accuracy. Traditional CNN-based techniques under-utilize the inter-band correlations of HSI because they rely primarily on 2D-CNNs for feature extraction. In contrast, 3D-CNNs extract spectral and spatial information in a single operation. While this overcomes the limitation of 2D-CNNs, extraction at a single fixed scale may still yield insufficient features. To overcome this issue, we propose an HSI classification approach named Tri-CNN, based on a multi-scale 3D-CNN and three-branch feature fusion. We first extract HSI features using 3D-CNNs at three different scales. The three resulting feature maps are then flattened and concatenated. To obtain the classification result, the fused features pass through several fully connected layers and finally a softmax layer. Experiments are conducted on three datasets: Pavia University (PU), Salinas scene (SA), and GulfPort (GP). Classification results indicate that the proposed method achieves remarkable performance in terms of Overall Accuracy (OA), Average Accuracy (AA), and the Kappa coefficient when compared against existing methods.
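The three-branch fusion described above can be sketched schematically. The snippet below is a minimal numpy stand-in, not the authors' actual Tri-CNN: the kernel scales, patch size, and classifier head are assumptions chosen for illustration, and a naive loop replaces a trained 3D-CNN layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3d_valid(cube, kernel):
    """Naive 3D 'valid' convolution + ReLU, a stand-in for one 3D-CNN layer."""
    kd, kh, kw = kernel.shape
    D, H, W = cube.shape
    out = np.empty((D - kd + 1, H - kh + 1, W - kw + 1))
    for d in range(out.shape[0]):
        for h in range(out.shape[1]):
            for w in range(out.shape[2]):
                out[d, h, w] = np.sum(cube[d:d+kd, h:h+kh, w:w+kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU activation

# A toy HSI patch: 16 spectral bands, 9x9 spatial window (sizes are illustrative).
patch = rng.standard_normal((16, 9, 9))

# Three branches with different (hypothetical) spectral kernel scales.
scales = [(3, 3, 3), (5, 3, 3), (7, 3, 3)]
branches = [conv3d_valid(patch, rng.standard_normal(s) * 0.1) for s in scales]

# Flatten each branch's feature map and concatenate into one fused vector.
fused = np.concatenate([b.ravel() for b in branches])

# A single random dense layer + softmax stands in for the classifier head.
n_classes = 9
logits = fused @ (rng.standard_normal((fused.size, n_classes)) * 0.01)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In a real implementation each branch would be a stack of trained 3D convolution layers; the point here is only the data flow: three scales, flatten, concatenate, classify.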
Ship detection from remote sensing images has gradually gained attention over the years due to its wide range of applications in maritime surveillance, such as oil discharge control, sea pollution monitoring, and harbour management. Although many ship detection methods have been developed, several challenges remain unsolved, especially in complex environments; these include occlusions caused by shadows, clouds, and fog. Deep learning algorithms, especially Deep Convolutional Neural Networks (DCNNs), are now considered a powerful approach for automatically detecting ships in satellite imagery. In this paper, an enhanced Faster R-CNN (FRCNN) model is used to address these challenges. The enhanced FRCNN, which fuses high-level features with low-level features, is trained and tested in the frequency domain on the publicly available Airbus Ship Detection satellite imagery dataset provided by Kaggle. Its performance is compared with that of the original FRCNN in terms of Overall Accuracy (OA) and mean Average Precision (mAP).
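The abstract does not specify how images are mapped to the frequency domain before detection, so the snippet below is only one plausible preprocessing step: a shifted 2D FFT log-magnitude, normalized to a CNN-friendly range. The function name and normalization are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

def to_frequency_domain(img):
    """Shifted 2D FFT log-magnitude of a single-channel image tile.

    A common frequency-domain representation that could replace (or be
    stacked with) the raw image as detector input.
    """
    f = np.fft.fftshift(np.fft.fft2(img))   # center the zero-frequency bin
    mag = np.log1p(np.abs(f))               # compress the dynamic range
    # Normalize to [0, 1] so the tensor range matches typical CNN inputs.
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)

rng = np.random.default_rng(42)
tile = rng.random((256, 256))               # stand-in for a satellite image tile
freq = to_frequency_domain(tile)
```

The transformed tile keeps the spatial dimensions of the input, so it can be fed to a standard detector backbone without architectural changes.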
Satellite images are used in various governmental applications, such as urbanization planning and environmental monitoring. Spatial resolution has a crucial impact on the usability of remote sensing imagery; hence, increasing the spatial resolution of an image is an important pre-processing step that can improve the performance of downstream tasks such as segmentation. Once a satellite is launched, the more practical way to improve the resolution of its captured images is to apply Single Image Super Resolution (SISR) techniques. In recent years, Deep Convolutional Neural Networks (DCNNs) have been recognized as a highly effective tool for reconstructing a High Resolution (HR) image from its Low Resolution (LR) counterpart, an open problem due to the inherent difficulty of estimating the missing high-frequency components. The aim of this paper is to design and implement a satellite image SISR algorithm that estimates high-frequency details by training DCNNs with respect to wavelet analysis. The goal is to improve the spatial resolution of multispectral remote sensing images captured by the DubaiSat-2 satellite. The accuracy of the proposed algorithm is assessed using several metrics, including Peak Signal-to-Noise Ratio (PSNR), Wavelet-based Signal-to-Noise Ratio (WSNR), and the Structural Similarity Index Measure (SSIM).
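The abstract does not detail the wavelet basis or network architecture, so the sketch below only illustrates the analysis/synthesis machinery such an approach rests on: a one-level 2D Haar DWT splitting an image into low-frequency (LL) and high-frequency (LH, HL, HH) subbands, its exact inverse, and the PSNR metric used for evaluation. The choice of Haar and all function names are assumptions for illustration; a wavelet-domain SISR network would predict the high-frequency subbands that are missing from the LR input.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages (low-pass)
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences (high-pass)
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def psnr(ref, est, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB, one of the paper's metrics."""
    mse = np.mean((ref - est) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(7)
image = rng.random((64, 64))                  # stand-in for an image band
LL, LH, HL, HH = haar_dwt2(image)
recon = haar_idwt2(LL, LH, HL, HH)            # perfect reconstruction check
```

Because the transform is invertible, a network that accurately predicts the high-frequency subbands from the LL band can synthesize an HR estimate through the inverse transform.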