Hyperspectral Image (HSI) classification methods that use Deep Learning (DL) have proven to be effective in recent years. In particular, Convolutional Neural Networks (CNNs) have demonstrated extremely powerful performance in such tasks. However, the lack of training samples is one of the main contributors to low classification performance. Traditional CNN-based techniques under-utilize the inter-band correlations of HSI because they primarily use 2D-CNNs for feature extraction. In contrast, 3D-CNNs extract both spectral and spatial information using the same operation. While this overcomes the limitation of 2D-CNNs, extracting features at a single scale may still be insufficient. To overcome this issue, we propose an HSI classification approach named Tri-CNN, which is based on a multi-scale 3D-CNN and three-branch feature fusion. We first extract HSI features using 3D-CNNs at three different scales. The three resulting feature sets are then flattened and concatenated. To obtain the classification results, the fused features traverse a number of fully connected layers and finally a softmax layer. Experiments are conducted on three datasets: Pavia University (PU), Salinas scene (SA), and GulfPort (GP). Classification results indicate that our proposed methodology shows remarkable performance in terms of Overall Accuracy (OA), Average Accuracy (AA), and Kappa metrics when compared against existing methods.
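The three-branch extract-flatten-concatenate-classify pipeline described above can be sketched in plain NumPy. This is a minimal illustration, not the Tri-CNN implementation: the patch size, the three kernel depths, and the single random fully connected layer are all assumptions chosen for readability.

```python
import numpy as np

def conv3d_valid(vol, kernel):
    """Naive valid-mode 3D convolution over a (bands, height, width) volume."""
    kd, kh, kw = kernel.shape
    d, h, w = vol.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(vol[i:i + kd, j:j + kh, k:k + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
patch = rng.normal(size=(30, 9, 9))  # hypothetical HSI patch: 30 bands, 9x9 spatial window

# Three branches with different (illustrative) spectral kernel depths
kernels = [rng.normal(size=s) * 0.01 for s in [(3, 3, 3), (5, 3, 3), (7, 3, 3)]]
feats = [conv3d_valid(patch, k).ravel() for k in kernels]  # flatten each branch
fused = np.concatenate(feats)                              # three-branch feature fusion

n_classes = 4
W = rng.normal(size=(n_classes, fused.size)) * 0.01  # toy fully connected layer
probs = softmax(W @ fused)                           # class probabilities
```

Each branch sees a different spectral receptive field, which is the sense in which the fusion is multi-scale; a real implementation would stack several learned 3D kernels and fully connected layers per branch.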
Ship detection from remote sensing images has been a topic of interest that gradually gained attention over the years due to the wide variety of its applications in the field of maritime surveillance, such as oil discharge control, sea pollution monitoring, and harbour management. Even though an extensive number of methods have been developed for ship detection, several challenges remain unsolved, especially in complex environments. These challenges include occlusions due to shadows, clouds, and fog. Nowadays, deep learning algorithms, especially Deep Convolutional Neural Networks (DCNNs), are considered a powerful approach for automatically detecting ships in satellite imagery. In this paper, an enhanced Faster R-CNN (FRCNN) model will be used to overcome the aforementioned unsolved challenges. The enhanced FRCNN, which combines high-level features with low-level features, will be trained and tested in the frequency domain using the publicly available Airbus Ship Detection satellite imagery dataset provided by Kaggle. Its performance will be compared to that of the original FRCNN in terms of Overall Accuracy (OA) and mean Average Precision (mAP).
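The abstract does not specify how images are mapped into the frequency domain; one common choice, shown here purely as an assumption, is the shifted log-magnitude 2D Fourier spectrum of each patch, which could then be fed to the detector in place of (or alongside) the raw pixels.

```python
import numpy as np

def to_frequency_domain(img):
    """Shifted log-magnitude 2D spectrum of an image patch (illustrative transform)."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))  # move the DC component to the center
    return np.log1p(np.abs(spectrum))             # compress the large dynamic range

sea = np.zeros((64, 64))
sea[20:40, 25:45] = 1.0          # crude bright "ship" blob on a dark background
spec = to_frequency_domain(sea)  # frequency-domain representation of the patch
```

For a non-negative image the DC term dominates, so after the shift the peak of the spectrum sits at the patch center; the log compression keeps the off-center structure visible to a downstream network.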
Nowadays, satellite images are used in various governmental applications, such as urbanization and monitoring the environment. Spatial resolution is an element of crucial impact on the usage of remote sensing imagery. As such, increasing the spatial resolution of an image is an important pre-processing step that can improve the performance of various image processing tasks, such as segmentation. Once a satellite is launched, the more practical solution to improve the resolution of its captured images is to use Single Image Super Resolution (SISR) techniques. In recent years, Deep Convolutional Neural Networks (DCNNs) have been recognized as a highly effective tool to reconstruct a High Resolution (HR) image from its Low Resolution (LR) counterpart, which is an open problem due to the inherent difficulty of estimating the missing high-frequency components. The aim of this research paper is to design and implement a satellite image SISR algorithm that estimates high-frequency details by training a Deep Convolutional Neural Network (DCNN) guided by wavelet analysis. The goal is to improve the spatial resolution of multispectral remote sensing images captured by the DubaiSat-2 satellite. The accuracy of the proposed algorithm is assessed using several metrics, such as Peak Signal-to-Noise Ratio (PSNR), Wavelet-based Signal-to-Noise Ratio (WSNR), and Structural Similarity Index Measurement (SSIM).
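In wavelet-domain SISR, the high-frequency components the network must estimate are typically the detail subbands of a wavelet decomposition. The DCNN itself is omitted here; the sketch below only implements the Haar analysis/synthesis pair such a pipeline would be built around, where LH, HL, and HH are the detail subbands a network would be trained to predict for the HR image.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar decomposition into LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row-wise average (low-pass)
    d = (img[0::2, :] - img[1::2, :]) / 2   # row-wise difference (high-pass)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2      # horizontal details
    hl = (d[:, 0::2] + d[:, 1::2]) / 2      # vertical details
    hh = (d[:, 0::2] - d[:, 1::2]) / 2      # diagonal details
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
ll, lh, hl, hh = haar_dwt2(img)
rec = haar_idwt2(ll, lh, hl, hh)  # reconstructs img exactly
```

A wavelet-domain SISR network would take the LR image (roughly the LL band) as input, predict LH, HL, and HH, and apply the inverse transform to synthesize the HR output.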
Traffic accidents impose a significant burden on daily life due to the huge social, environmental, and economic costs associated with them. The rapid development in data science, geographic data collection, and processing methods encourages researchers to delineate traffic accident hotspots and to effectively predict and estimate traffic accidents. In this study, the Kaggle traffic accidents dataset covering the United Kingdom from 2012 to 2014 will be investigated. Our methodology consists of three main techniques. First, Moran's I method of spatial autocorrelation and Getis-Ord Gi* statistics will be used to examine and relate the traffic accidents dataset in terms of spatial and temporal features. Second, the weighted features will be used as inputs for a Deep Feedforward Neural Network (DFFNN). Finally, the performance of the proposed DFFNN will be evaluated based on its accuracy, misclassification rate, precision, prevalence, histogram of errors, and confusion matrix. These evaluation metrics are then used as a comparison basis against the performance of a Support Vector Machine (SVM). The results will focus on using spatial statistics techniques to effectively weight different features according to their contribution to traffic accidents. Consequently, the output of the DFFNN estimates the likelihood of accident occurrence at a given location. Furthermore, it would be beneficial to investigate whether these accidents exhibit certain temporal patterns, such as certain days or months where accidents potentially occur more frequently. The proposed method can be effectively used by different authorities to implement improved planning and management approaches for traffic accident reduction. Moreover, it can identify and locate road risk segments where immediate action should be considered.
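Global Moran's I, the first spatial statistic named above, can be computed directly from its definition, I = (n / S0) * Σᵢⱼ wᵢⱼ (xᵢ - x̄)(xⱼ - x̄) / Σᵢ (xᵢ - x̄)². The sketch below uses binary rook-contiguity weights on a small grid of (hypothetical) accident counts; the study's actual weighting scheme is not specified in the abstract.

```python
import numpy as np

def morans_i(grid):
    """Global Moran's I with binary rook-contiguity weights on a 2D grid."""
    x = grid.ravel().astype(float)
    n = x.size
    xd = x - x.mean()                      # deviations from the mean
    rows, cols = grid.shape
    num = 0.0                              # sum of w_ij * xd_i * xd_j
    s0 = 0.0                               # sum of all weights
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    num += xd[i] * xd[rr * cols + cc]
                    s0 += 1.0
    return (n / s0) * num / (xd @ xd)

checker = np.indices((4, 4)).sum(0) % 2   # perfectly dispersed pattern -> I = -1
smooth = np.indices((4, 4)).sum(0)        # smooth gradient (clustered) -> I > 0
```

Values near +1 indicate clustering (hotspots), values near -1 indicate dispersion, and values near -1/(n-1) indicate spatial randomness; local Getis-Ord Gi* then pinpoints where the clustering occurs.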
Machine learning and computer vision algorithms can provide a precise and automated interpretation of medical videos. The segmentation of the left ventricle in echocardiography videos plays an essential role in cardiology for carrying out clinical cardiac diagnosis and monitoring the patient's condition. Most of the developed deep learning algorithms for video segmentation require an enormous amount of labeled data to generate accurate results. Thus, there is a need to develop new semi-supervised segmentation methods due to the scarcity and high cost of labeled data. In recent research, semi-supervised learning approaches based on graph signal processing have emerged in computer vision due to their ability to exploit the geometrical structure of data. Video object segmentation can be considered a node classification problem. In this paper, we propose a new approach called GraphECV, based on graph signal processing for semi-supervised video object segmentation, applied to the segmentation of the left ventricle in echocardiography videos. GraphECV includes instance segmentation; extraction of temporal, texture, and statistical features to represent the nodes; construction of a graph using K-nearest neighbors; graph sampling to embed the graph with a small number of labeled nodes, or graph signals; and finally a semi-supervised learning approach based on the minimization of the Sobolev norm of graph signals. The new algorithm is evaluated using two publicly available echocardiography datasets, EchoNet-Dynamic and CAMUS. The proposed approach outperforms other state-of-the-art methods under challenging background conditions.
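The graph pipeline above (node features, kNN graph, a few labeled nodes, Sobolev-norm minimization) can be sketched on toy data. This is a minimal sketch, not GraphECV itself: the two-cluster features, the regularization form ||M(z - y)||² + λ zᵀ(L + εI)ᵝ z, and all parameter values below are assumptions chosen so the propagation is easy to verify.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy node features: two well-separated clusters standing in for per-region
# temporal/texture/statistical descriptors
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
y = np.array([0] * 10 + [1] * 10)

# Symmetrized k-nearest-neighbour adjacency
k = 3
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(D2, np.inf)
W = np.zeros_like(D2)
for i in range(len(X)):
    W[i, np.argsort(D2[i])[:k]] = 1.0
W = np.maximum(W, W.T)

L = np.diag(W.sum(1)) - W              # combinatorial graph Laplacian

# Graph sampling: only four labeled nodes (two per class)
labeled = np.array([0, 1, 10, 11])
mask = np.zeros(len(X))
mask[labeled] = 1.0
M = np.diag(mask)

eps, beta, lam = 0.01, 1, 0.5
S = np.linalg.matrix_power(L + eps * np.eye(len(X)), beta)  # Sobolev-type operator
# Minimize ||M(z - y)||^2 + lam * z^T S z  =>  (M + lam * S) z = M y
z = np.linalg.solve(M + lam * S, mask * y)
pred = (z > 0.5).astype(int)           # propagated labels for every node
```

Because the Sobolev term penalizes variation across graph edges, the four known labels propagate to all nodes within each cluster, which is the mechanism that lets a small set of labeled graph signals segment every frame.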