In relation to 3D bathymetric modelling, this article analyzes the performance of Kriging approaches as a function of the location and density of the measured depth points. The experiments were carried out on a multi-beam sonar (MBS) dataset that includes 240,000 soundings covering a sea-bottom area near Giglio Island (Italy). Seven subsets were derived randomly from the initial regular MBS dataset, selecting an increasing number of uniformly spaced points. Seven models were generated for both Ordinary Kriging and Universal Kriging. Each model was submitted to leave-one-out cross-validation to assess the exactness of the predicted values and was compared with the initial grid to better evaluate the accuracy as a function of point number and distribution. To investigate this relationship, a new index called the Morphological Variation Index (MVI) was introduced as a measure of the level of variation of seabed morphology. The results validate the efficiency of the Kriging methods and underline the influence of the dataset distribution on the 3D model, highlighting MVI as a useful index for representing the seabed variation as a single value. Finally, in non-rugged areas, using 1 point every 1,000 m², the RMSE of the differences between measured and interpolated values falls below 1 m, while a further increase in soundings is required in the presence of a high level of variation of seabed morphology.
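The leave-one-out procedure described above can be sketched as follows: each sounding is removed in turn, the surface is predicted at that location from the remaining points, and the residuals are pooled into an RMSE. This minimal sketch uses a simple inverse-distance predictor as a stand-in for kriging (which requires a geostatistics library); the soundings are hypothetical, not the Giglio Island data.

```python
import math

def idw_predict(pts, x, y, power=2.0):
    """Inverse-distance-weighted depth at (x, y); a stand-in for a kriging predictor."""
    num = den = 0.0
    for px, py, z in pts:
        d = math.hypot(x - px, y - py)
        if d == 0.0:
            return z  # exact coincidence: return the measured depth
        w = d ** -power
        num += w * z
        den += w
    return num / den

def loo_rmse(pts):
    """Leave-one-out cross-validation: predict each point from all the others."""
    residuals = []
    for i, (x, y, z) in enumerate(pts):
        others = pts[:i] + pts[i + 1:]
        residuals.append(idw_predict(others, x, y) - z)
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical soundings: (easting, northing, depth in metres)
soundings = [(0, 0, -10.0), (10, 0, -12.0), (0, 10, -11.0),
             (10, 10, -13.0), (5, 5, -11.5)]
print(loo_rmse(soundings))
```

The same validation loop applies unchanged to any predictor, which is why the paper can compare Ordinary and Universal Kriging on an equal footing.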
The monitoring of burned areas can easily be performed using satellite multispectral images: several indices are available in the literature for highlighting the differences between healthy vegetation and burned areas, given their different spectral signatures. However, these indices may have limitations determined, for example, by the presence of clouds or water bodies that produce false alarms. To avoid these inaccuracies and optimize the results, this work proposes a new index for detecting burned areas, named the Normalized Burn Ratio Plus (NBR+), based on a combination of Sentinel-2 bands. The efficiency of this index is verified by comparing it with five other existing indices, all applied to an area with a surface of about 500 km² covering the north-eastern part of Sicily (Italy). To achieve this aim, both a uni-temporal approach (single-date image) and a bi-temporal approach (two-date images) are adopted. The maximum likelihood classifier (MLC) is applied to each resulting index map to define the threshold separating burned pixels from non-burned ones. To evaluate the efficiency of the indices, confusion matrices are constructed and compared with each other. The NBR+ shows excellent results, especially because it excludes a large part of the areas incorrectly classified as burned by other indices even though they are actually clouds or water bodies.
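The abstract does not give the NBR+ formula, but the classical Normalized Burn Ratio it extends is standard: NBR = (NIR − SWIR) / (NIR + SWIR), computed on Sentinel-2 from band B8 (NIR) and band B12 (SWIR), with the bi-temporal difference dNBR = NBR(pre-fire) − NBR(post-fire). A minimal sketch with hypothetical reflectance values:

```python
def nbr(nir, swir):
    """Classical Normalized Burn Ratio from Sentinel-2 B8 (NIR) and B12 (SWIR) reflectances."""
    return (nir - swir) / (nir + swir)

def dnbr(nbr_pre, nbr_post):
    """Bi-temporal change index: dNBR = NBR(pre-fire) - NBR(post-fire)."""
    return nbr_pre - nbr_post

# Hypothetical reflectances: healthy vegetation has high NIR and low SWIR,
# while a burned pixel shows the opposite pattern, so NBR turns negative.
healthy = nbr(0.45, 0.10)
burned = nbr(0.15, 0.35)
print(dnbr(healthy, burned))  # large positive dNBR flags the burn
```

High dNBR values mark fire-affected pixels; the thresholds separating burn-severity classes are what the MLC step in the paper estimates.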
Pan-sharpening methods transfer the higher resolution of a panchromatic image to the multispectral bands of the same scene. Different approaches are available in the literature, and only some of them are included in remote sensing software for automatic application. In addition, the quality of the results supplied by a specific method varies according to the characteristics of the scene; consequently, different algorithms must be compared to find the best-performing one. Pan-sharpening methods can nevertheless be applied using basic GIS functions in the absence of specific pan-sharpening tools, but this operation is laborious and time-consuming. This paper aims to explain the approach implemented in Quantum GIS (QGIS) for the automatic pan-sharpening of Pléiades images. The experiments are carried out on data concerning the Greek island of Lesbos. In total, 14 different pan-sharpening methods are applied to reduce the pixel dimensions of the four multispectral bands from 2 m to 0.5 m. The automatic procedure involves basic functions already included in GIS software; it also permits the evaluation of the quality of the resulting images by supplying the values of appropriate indices. The results demonstrate that the approach always provides the user with the highest-performing method, so the best possible fused products are obtained with minimal effort in a reduced timeframe.
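To illustrate why pan-sharpening reduces to basic raster algebra, here is a sketch of the classic Brovey transform, one of the simplest fusion methods: each multispectral band is rescaled by the ratio of the panchromatic value to the sum of the bands. This is a generic example, not necessarily one of the 14 methods ranked best in the study; the arrays are synthetic.

```python
import numpy as np

def brovey(ms, pan):
    """Brovey-transform pan-sharpening.

    ms  : (bands, H, W) multispectral array resampled to the pan grid
    pan : (H, W) panchromatic array
    Each band is multiplied by pan / (sum of bands): a pure map-algebra
    operation that any GIS raster calculator can reproduce band by band.
    """
    total = ms.sum(axis=0)
    return ms * pan / np.where(total == 0, 1, total)

# Tiny synthetic scene: 3 bands, 2x2 pixels
ms = np.array([[[0.2, 0.4], [0.1, 0.3]],
               [[0.3, 0.2], [0.2, 0.3]],
               [[0.1, 0.2], [0.1, 0.2]]])
pan = np.array([[0.6, 0.9], [0.4, 0.8]])
fused = brovey(ms, pan)
# The fused bands inherit the pan intensity: their sum equals pan
print(np.allclose(fused.sum(axis=0), pan))
```

Because the fused bands sum to the panchromatic intensity while keeping the original band ratios, the spatial detail of the pan image is injected without changing the relative spectral content.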
Spatial interpolation, i.e. the estimation of the values of a variable at unobserved locations in geographic space from the values at observed locations, is fundamental in all geophysical sciences, above all for the construction of digital elevation models (DEMs). Several methods are available in the literature for spatial interpolation, and the choice of the most suitable one for building a DEM depends on many factors, particularly on the distribution of the sampled points and, therefore, on the morphology of the area to be mapped. This paper aims to choose the most appropriate interpolators for DEM production by comparing different methods usually available in GIS software. For the purpose of developing the best-performing model and comparing interpolators, a set of elevation data collected from a digital vector map is used. The accuracy of the interpolation methods is tested by analyzing four statistical parameters obtained by leave-one-out cross-validation. In particular, the minimum, maximum, mean and root mean square error (RMSE) are calculated for each interpolation method, considering the residual at each sampling point between the measured and interpolated values.
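The four statistics named above are straightforward to compute once the cross-validation residuals (measured minus interpolated values) are available. A minimal sketch, with hypothetical residuals in metres:

```python
import math

def residual_stats(residuals):
    """Minimum, maximum, mean and RMSE of cross-validation residuals."""
    n = len(residuals)
    return {
        "min": min(residuals),
        "max": max(residuals),
        "mean": sum(residuals) / n,
        "rmse": math.sqrt(sum(r * r for r in residuals) / n),
    }

# Hypothetical residuals (measured - interpolated elevation, in metres)
stats = residual_stats([-0.8, 0.3, 0.5, -0.2, 0.6])
print(stats)
```

The mean reveals systematic bias, the min/max bound the worst errors, and the RMSE summarizes overall accuracy, which is why all four are reported per interpolator.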
<p class="Abstract">Electronic Navigational Charts (ENCs), official databases created by national hydrographic offices and included in the Electronic Chart Display and Information System (ECDIS), supply, among the essential indications for safe navigation, data about sea-bottom morphology in the form of depth points and isolines. These data are very useful for building bathymetric 3D models: by applying interpolation methods, it is possible to produce a continuous representation of the seafloor to support studies concerning different aspects of a marine area, such as the direction and intensity of currents, the sensitivity of habitats and species, etc. Many interpolation methods are available in the literature for bathymetric data modelling: among them, kriging methods are highly effective but require careful analysis to define the input parameters, i.e. the semi-variogram models. This paper aims to analyze kriging approaches for depth data concerning the Bay of Pozzuoli. Attention is focused on the role of semi-variogram models for Ordinary and Universal kriging. Depth data included in two ENCs, namely IT400129 and IT400130, are processed using Geostatistical Analyst, an extension of ArcGIS 10.3.1 (ESRI). The results testify to the relevance of the choice of the mathematical function of the semi-variogram: for this case study, the Stable model supplies the best performance in terms of depth accuracy for both Ordinary and Universal kriging.</p>
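The Stable model mentioned above is commonly parameterized as γ(h) = c₀ + c·(1 − exp(−(h/a)ˢ)) with nugget c₀, partial sill c, range a and shape 0 < s ≤ 2; it is fitted to the empirical semi-variance of the depth pairs. A minimal sketch with illustrative parameter values (not those fitted in the paper):

```python
import math

def stable_semivariogram(h, nugget, partial_sill, rng, shape):
    """Stable semi-variogram model: gamma(h) = c0 + c * (1 - exp(-(h/a)^s)), 0 < s <= 2."""
    return nugget + partial_sill * (1.0 - math.exp(-((h / rng) ** shape)))

def empirical_semivariance(pts, h, tol):
    """Average 0.5*(z_i - z_j)^2 over point pairs separated by roughly lag h."""
    sq_diffs = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            (x1, y1, z1), (x2, y2, z2) = pts[i], pts[j]
            if abs(math.hypot(x1 - x2, y1 - y2) - h) <= tol:
                sq_diffs.append(0.5 * (z1 - z2) ** 2)
    return sum(sq_diffs) / len(sq_diffs) if sq_diffs else None

# Illustrative parameters: nugget 0.1 m^2, partial sill 2.0 m^2, range 300 m, shape 1.5
print(stable_semivariogram(100.0, 0.1, 2.0, 300.0, 1.5))
```

Fitting the model (choosing c₀, c, a, s so the curve tracks the empirical semi-variances across lags) is the "deep analysis" step that distinguishes kriging from parameter-free interpolators.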
The "Istituto Idrografico della Marina Militare" (IIMM) provides the Italian hydrographic service through the execution of bathymetric surveys, the production of nautical charts and publications, and the dissemination of nautical information, aimed at the safety of navigation and of human life at sea. The datasets of depth points acquired through bathymetric surveys are useful for modelling the sea bottom using the interpolation methods available in Geographic Information System (GIS) software. In this paper, a single-beam dataset from the IIMM is used with the aim of comparing different interpolation methods for GIS modelling of the sea bottom. The study area is the sea close to the east coast of Isola del Giglio in the Tuscan Archipelago (Italy). The following nine interpolation methods are selected and applied using ArcGIS software, version 10.3.1: Inverse Distance Weighting (IDW), Local Polynomial Interpolation of different orders (from the first to the fifth), and Ordinary Kriging with three different variogram models (Gaussian, circular, exponential). The accuracy of the results is tested via leave-one-out cross-validation, so statistical values (minimum, maximum, mean and root mean square error) are calculated for each interpolation method, taking into account the residual at each sampling point between the measured and interpolated values. Finally, a 3D model is created from the best-interpolating algorithm. The results underline the role of cross-validation as a preliminary way to select the best-performing interpolation method, which is difficult to identify otherwise.
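Of the nine methods listed, Local Polynomial Interpolation is the least self-explanatory: for each prediction location it fits a low-order polynomial surface to the nearby soundings by least squares and evaluates it there. A minimal first-order sketch (the plane z = a + bx + cy), with hypothetical soundings rather than the IIMM data:

```python
import numpy as np

def local_poly1(pts, x0, y0, radius):
    """First-order local polynomial interpolation.

    Fits z = a + b*x + c*y by least squares to the soundings within
    `radius` of (x0, y0) and evaluates the plane at (x0, y0).
    """
    near = [(x, y, z) for x, y, z in pts
            if np.hypot(x - x0, y - y0) <= radius]
    if len(near) < 3:
        raise ValueError("need at least 3 neighbours to fit a plane")
    A = np.array([[1.0, x, y] for x, y, _ in near])
    b = np.array([z for _, _, z in near])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs @ np.array([1.0, x0, y0])

# Hypothetical soundings lying exactly on the plane z = -5 - 0.1*x + 0.2*y
pts = [(0, 0, -5.0), (10, 0, -6.0), (0, 10, -3.0), (10, 10, -4.0)]
print(local_poly1(pts, 5.0, 5.0, 20.0))
```

Higher orders (up to the fifth, as in the paper) simply add polynomial terms such as x², xy and y² to the design matrix, trading smoothness for flexibility on rough seabeds.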