Recent improvements in the frequency, type, and availability of satellite images mean it is now feasible to routinely study volcanoes in remote and inaccessible regions, including those with no ground‐based monitoring. In particular, Interferometric Synthetic Aperture Radar data can detect surface deformation, which has a strong statistical link to eruption. However, the data set produced by the recently launched Sentinel‐1 satellite is too large to be manually analyzed on a global basis. In this study, we systematically process >30,000 short‐term interferograms at over 900 volcanoes and apply machine learning algorithms to automatically detect volcanic ground deformation. We use a convolutional neural network to classify interferometric fringes in wrapped interferograms with no atmospheric corrections. We employ a transfer learning strategy and test a range of pretrained networks, finding that AlexNet is best suited to this task. The positive results are checked by an expert and fed back for model updating. Following training with a combination of both positive and negative examples, this method reduced the number of interferograms requiring further inspection to ∼100, of which at least 39 are considered true positives. We demonstrate that machine learning can efficiently detect large, rapid deformation signals in wrapped interferograms, but further development is required to detect slow or small deformation patterns which do not generate multiple fringes in short duration interferograms. This study is the first to use machine learning approaches for detecting volcanic deformation in large data sets and demonstrates the potential of such techniques for developing alert systems based on satellite imagery.
This paper describes the application of machine learning techniques to develop a state-of-the-art detection and prediction system for spatiotemporal events found within remote sensing data; specifically, Harmful Algal Bloom events (HABs). We propose an HAB detection system based on: a ground truth historical record of HAB events, a novel spatiotemporal datacube representation of each event (from MODIS and GEBCO bathymetry data) and a variety of machine learning architectures utilising state-of-the-art spatial and temporal analysis methods based on Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) components together with Random Forest and Support Vector Machine (SVM) classification methods. This work has focused specifically on the case study of the detection of Karenia brevis (K. brevis) algal HAB events within the coastal waters of Florida (over 2850 events from 2003 to 2018; an order of magnitude larger than any previous machine learning detection study into HAB events). The development of multimodal spatiotemporal datacube data structures and associated novel machine learning methods gives a unique architecture for the automatic detection of environmental events. Specifically, when applied to the detection of HAB events it gives a maximum detection accuracy of 91% and a Kappa coefficient of 0.81 for the Florida data considered. An HAB forecast system was also developed, in which a temporal subset of each datacube was used to predict the presence of an HAB in the future. This forecast system was not significantly less accurate than the detection system, predicting with 86% accuracy up to 8 days into the future.
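One branch of the classification pipeline described above can be sketched with a Random Forest on flattened datacubes. This is an illustrative sketch only: the datacube dimensions (8 time steps, 16 × 16 pixels, 3 bands) and the synthetic labels are assumptions, not the paper's actual data layout.

```python
# Illustrative sketch: classify HAB vs. non-HAB events by flattening each
# spatiotemporal datacube into a feature vector for a Random Forest.
# Dimensions and data are synthetic assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_events = 200
# One datacube per event: (time, height, width, bands)
cubes = rng.normal(size=(n_events, 8, 16, 16, 3))
labels = rng.integers(0, 2, size=n_events)      # 1 = HAB event

X = cubes.reshape(n_events, -1)                 # flatten each datacube
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:150], labels[:150])                  # train split
pred = clf.predict(X[150:])                     # held-out predictions
```

The forecast variant described in the abstract would differ only in the input: a temporal subset of each datacube (e.g. the earliest time steps) would be used as features, with the label taken from a later date.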
The segmentation of images into meaningful and homogeneous regions is a key method for image analysis within applications such as content-based retrieval. The watershed transform is a well-established tool for the segmentation of images. However, watershed segmentation is often not effective for textured image regions that are perceptually homogeneous. In order to properly segment such regions, the concept of the "texture gradient" is now introduced. Texture information and its gradient are extracted using a novel nondecimated form of a complex wavelet transform. A novel marker location algorithm is subsequently used to locate significant homogeneous textured or non-textured regions. A marker-driven watershed transform is then used to properly segment the identified regions. The combined algorithm produces effective texture- and intensity-based segmentation for the application to content-based image retrieval.
Index Terms: Image edge analysis, image segmentation, image texture analysis, wavelet transforms.
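The marker-driven watershed step can be illustrated in a minimal form. This sketch uses a plain intensity image rather than the paper's complex-wavelet texture gradient, and the toy image and marker positions are assumptions: two dark regions separated by a bright ridge are flooded from hand-placed markers.

```python
# Minimal marker-driven watershed sketch using an intensity image as a
# stand-in for the paper's texture gradient. Toy data, not the authors' code.
import numpy as np
from scipy import ndimage

# Two dark regions (value 50) separated by a bright ridge (value 200).
img = np.full((7, 7), 200, dtype=np.uint8)
img[1:6, 1:3] = 50      # region A
img[1:6, 4:6] = 50      # region B

# Markers placed inside each region (the paper locates these
# automatically from the texture gradient).
markers = np.zeros((7, 7), dtype=np.int8)
markers[3, 1] = 1       # seed for region A
markers[3, 5] = 2       # seed for region B

# Flood from the markers; each pixel is assigned its marker's label.
labels = ndimage.watershed_ift(img, markers)
```

In the full algorithm the flooding surface would be the texture gradient, so the watershed boundary falls where texture changes rather than only where intensity changes.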
A perceptual image fusion method is proposed that employs explicit luminance and contrast masking models. These models are combined to give the perceptual importance of each coefficient produced by the dual-tree complex wavelet transform of each input image. This combined model of perceptual importance is used to select which coefficients are retained and furthermore to determine how to present the retained information in the most effective way. This paper is the first to give a principled approach to image fusion from a perceptual perspective. Furthermore, the proposed method is shown to give improved quantitative and qualitative results compared with previously developed methods.
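The coefficient-selection idea behind this fusion method can be sketched in simplified form. The dual-tree complex wavelet transform and the luminance/contrast masking models are beyond a short example, so this sketch substitutes a single-level real Haar transform and uses raw coefficient magnitude as a crude proxy for perceptual importance; everything here is an illustrative assumption, not the proposed method.

```python
# Simplified fusion sketch: single-level Haar transform standing in for
# the dual-tree complex wavelet transform, with max-magnitude coefficient
# selection as a crude proxy for the perceptual importance model.
import numpy as np

def haar2(x):
    """Single-level 2D Haar: approximation a and detail subbands h, v, d."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = a + h + v + d
    x[0::2, 1::2] = a - h + v - d
    x[1::2, 0::2] = a + h - v - d
    x[1::2, 1::2] = a - h - v + d
    return x

def fuse(img1, img2):
    c1, c2 = haar2(img1), haar2(img2)
    a = (c1[0] + c2[0]) / 2                # average the approximation bands
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]  # keep larger-magnitude detail
    return ihaar2(a, *details)

rng = np.random.default_rng(1)
imgA = rng.normal(size=(8, 8))
imgB = rng.normal(size=(8, 8))
fused = fuse(imgA, imgB)
```

The proposed method replaces the max-magnitude rule with the combined luminance and contrast masking model, so that coefficients are retained and presented according to modeled perceptual importance rather than raw amplitude.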