Nowadays, satellite images are used in various governmental applications, such as urban planning and environmental monitoring. Spatial resolution has a crucial impact on the usability of remote sensing imagery. As such, increasing the spatial resolution of an image is an important pre-processing step that can improve the performance of downstream image processing tasks, such as segmentation. Once a satellite is launched, the most practical way to improve the resolution of its captured images is to use Single Image Super Resolution (SISR) techniques. In recent years, Deep Convolutional Neural Networks (DCNNs) have been recognized as a highly effective tool for reconstructing a High Resolution (HR) image from its Low Resolution (LR) counterpart, which remains an open problem due to the inherent difficulty of estimating the missing high-frequency components. The aim of this paper is to design and implement a satellite image SISR algorithm that estimates high-frequency details by training DCNNs in the wavelet domain. The goal is to improve the spatial resolution of multispectral remote sensing images captured by the DubaiSat-2 satellite. The accuracy of the proposed algorithm is assessed using several metrics, such as Peak Signal-to-Noise Ratio (PSNR), Wavelet-based Signal-to-Noise Ratio (WSNR) and Structural Similarity Index Measurement (SSIM).
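The wavelet-analysis step underlying this kind of approach can be sketched as follows: decompose an image into a low-frequency approximation and three high-frequency detail subbands, which are exactly the components a super-resolution network would be trained to predict. This is a minimal single-level 2-D Haar transform in NumPy, an illustration only and not the authors' DCNN pipeline; the function names are ours.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform.

    Returns four subbands (LL, LH, HL, HH). The three high-frequency
    subbands (LH, HL, HH) carry the detail a super-resolution model
    would estimate from a low-resolution input.
    """
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 2.0   # approximation (low frequency)
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: rebuild the full-resolution image from subbands."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out
```

The transform is exactly invertible, so an HR estimate follows directly from the LR approximation plus the (predicted) detail subbands.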
Machine learning and computer vision algorithms can provide precise and automated interpretation of medical videos. Segmentation of the left ventricle in echocardiography videos plays an essential role in cardiology for clinical cardiac diagnosis and for monitoring the patient's condition. Most deep learning algorithms developed for video segmentation require an enormous amount of labeled data to generate accurate results. Thus, because labeled data are scarce and costly, there is a need to develop new semi-supervised segmentation methods. In recent research, semi-supervised learning approaches based on graph signal processing have emerged in computer vision owing to their ability to exploit the geometrical structure of data. Video object segmentation can then be cast as a node classification problem. In this paper, we propose a new approach called GraphECV, based on graph signal processing, for semi-supervised video object segmentation, applied to the segmentation of the left ventricle in echocardiography videos. GraphECV comprises instance segmentation; extraction of temporal, texture and statistical features to represent the nodes; construction of a graph using K-nearest neighbors; graph sampling to embed the graph with a small number of labeled nodes, or graph signals; and finally a semi-supervised learning step based on minimization of the Sobolev norm of graph signals. The new algorithm is evaluated on two publicly available echocardiography datasets, EchoNet-Dynamic and CAMUS. The proposed approach outperforms other state-of-the-art methods under challenging background conditions.
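The core graph-signal-processing steps described above, a K-nearest-neighbor graph over node features followed by label interpolation that minimizes a graph Sobolev norm, can be sketched in NumPy. This is a generic illustration of the technique under our own simplifying assumptions (Gaussian edge weights, combinatorial Laplacian, Sobolev order s=2), not the GraphECV implementation; all function names are ours.

```python
import numpy as np

def knn_graph(X, k):
    """Symmetric K-nearest-neighbor adjacency from node feature vectors."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]        # skip self (distance 0)
        W[i, nbrs] = np.exp(-d[i, nbrs] ** 2)   # Gaussian edge weights
    return np.maximum(W, W.T)                   # symmetrize

def sobolev_interpolate(W, labeled, y, eps=0.1, s=2):
    """Infer unlabeled node signals by minimizing the graph Sobolev
    norm x^T (L + eps*I)^s x subject to the sampled labels x_S = y."""
    n = W.shape[0]
    L = np.diag(W.sum(1)) - W                   # combinatorial Laplacian
    M = np.linalg.matrix_power(L + eps * np.eye(n), s)
    unlabeled = np.setdiff1d(np.arange(n), labeled)
    x = np.zeros(n)
    x[labeled] = y
    # Closed-form minimizer: solve M_UU x_U = -M_US y for unknown values
    x[unlabeled] = np.linalg.solve(M[np.ix_(unlabeled, unlabeled)],
                                   -M[np.ix_(unlabeled, labeled)] @ y)
    return x
```

With only one labeled node per class, the interpolated signal propagates the labels across each cluster of the graph, which is the essence of the semi-supervised node classification step.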
Automatic ship segmentation from high-resolution Synthetic Aperture Radar (SAR) remote sensing images has gradually gained attention over the years owing to the abundance of Earth observation sensors. Recently, deep learning methods have provided a breakthrough, greatly increasing performance by using large amounts of labeled data. Yet, the high cost and scarcity of labeled samples significantly limit their wide use. Therefore, it is crucial to exploit unlabeled inputs and develop semi-supervised learning approaches to enhance the capacity of machine learning models. This letter proposes a semi-supervised segmentation algorithm for SAR images, named SemiSegSAR, based on graph signal processing. The method includes instance segmentation; texture and statistical SAR features to represent the nodes of the graph; K-nearest neighbors to construct the graph; and a Sobolev minimization algorithm to tackle the problem of semi-supervised semantic segmentation. The proposed algorithm is trained and tested on the publicly available SSDD and HRSID ship detection datasets. Experiments show that SemiSegSAR outperforms current state-of-the-art semi-supervised and supervised methods while requiring only a few labeled samples.
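The node-representation step, turning each segmented region into a vector of texture and statistical SAR descriptors, can be illustrated with a few classical statistics. The descriptors below (intensity mean, variance, coefficient of variation, and histogram entropy) are generic stand-ins chosen by us, not the exact feature set of SemiSegSAR; the coefficient of variation is included because it is a common speckle statistic for SAR intensity data.

```python
import numpy as np

def region_features(patch, bins=16):
    """Statistical descriptors for one segmented region (graph node).

    Returns [mean, variance, coefficient of variation, histogram entropy]
    computed over the region's pixel intensities.
    """
    p = np.asarray(patch, dtype=float).ravel()
    mean = p.mean()
    var = p.var()
    cov = np.sqrt(var) / mean if mean > 0 else 0.0   # speckle statistic
    hist, _ = np.histogram(p, bins=bins)
    prob = hist / hist.sum()
    prob = prob[prob > 0]                            # avoid log2(0)
    entropy = -np.sum(prob * np.log2(prob))          # texture richness
    return np.array([mean, var, cov, entropy])
```

Stacking such vectors for all regions yields the feature matrix on which the K-nearest-neighbor graph is built.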
The use of remote sensing in archaeological research allows non-intrusive prospection of sub-surfaces in arid regions before on-site investigation and excavation. While the current detection of suspected buried archaeological structures relies on visual interpretation, this work provides supporting archaeological guidance using remote sensing. The aim is to detect potential archaeological remains underneath the sand. This paper focuses on the surroundings of Saruq Al-Hadid, an archaeological site discovered in 2002 and located about 50 km southeast of Dubai, as archaeologists believe that other archaeological sites are potentially buried in the surrounding area. The input data are derived from a combination of L-band Synthetic Aperture Radar (ALOS PALSAR), whose long wavelength is able to penetrate the sand, and multispectral optical images (Landsat 7). This paper develops a new strategy to help detect suspected buried structures. The fusion of surface roughness and spectral indices tackles the well-known limitations of SAR images and yields a set of pixels with an archaeological signature distinct from man-made structures. The potential buried sites are then classified with a pixel-level unsupervised classification algorithm such as K-means cluster analysis. To test the performance of the proposed method, the results are compared with those obtained by visual interpretation.
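The pixel-level unsupervised classification step named above can be sketched as a plain K-means over fused per-pixel feature vectors (e.g. SAR surface roughness stacked with optical spectral indices). This is a textbook K-means in NumPy as an illustration of the clustering technique, under our own assumptions about initialization and stopping, not the paper's exact configuration.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means for pixel-level unsupervised classification.

    X: (n_pixels, n_features) fused feature vectors per pixel.
    Returns cluster labels and centroids.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data
    for _ in range(iters):
        # Assign each pixel to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

Pixels that cluster together in the fused roughness/spectral space can then be inspected as candidate buried structures.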
The analysis of facial appearance is significant for the early diagnosis of medical genetic diseases. Rapid developments in image processing and machine learning techniques facilitate the detection of facial dysmorphic features. This paper surveys recent studies on the screening of genetic abnormalities from facial features extracted from two-dimensional and three-dimensional images.