Earth observation data is becoming more accessible and affordable thanks to the Copernicus programme and its Sentinel missions. Every location worldwide can be freely monitored approximately every 5 days using the multi-spectral images provided by Sentinel-2. The spatial resolution of these images for the RGBN (RGB + near-infrared) bands is 10 m, which is more than enough for many tasks but falls short for many others. For this reason, if their spatial resolution could be enhanced without additional costs, any subsequent analysis based on these images would benefit. Previous works have mainly focused on increasing the resolution of the lower-resolution bands of Sentinel-2 (20 m and 60 m) to 10 m. In these cases, super-resolution is supported by the bands captured at finer resolution (RGBN at 10 m). In contrast, this paper focuses on increasing the spatial resolution of the 10 m bands to either 5 m or 2.5 m without any additional information being available. This problem is known as single-image super-resolution. For standard images, deep learning techniques have become the de facto standard for learning the mapping from lower- to higher-resolution images due to their learning capacity. However, super-resolution models learned for standard images do not work well with satellite images, and hence a model specific to this problem needs to be learned. The main challenge that this paper aims to solve is how to train a super-resolution model for Sentinel-2 images when no ground truth exists (Sentinel-2 images at 5 m or 2.5 m). Our proposal consists of using a reference satellite with high spectral similarity to Sentinel-2 but higher spatial resolution to create image pairs at both the source and target resolutions. This way, we can train a state-of-the-art convolutional neural network to recover details not present in the original RGBN bands.
An exhaustive experimental study is carried out to validate our proposal, including a comparison with the most widespread strategy for super-resolving Sentinel-2, which consists of learning a model to super-resolve from an under-sampled version at either 40 m or 20 m to the original 10 m resolution and then applying this model to super-resolve from 10 m to 5 m or 2.5 m. Finally, we also show that the spectral radiometry of the native bands is maintained when super-resolving images, in such a way that they can be used in any subsequent processing as if they were images acquired by Sentinel-2.
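The pair-construction strategy described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it assumes a generic block-average degradation and synthetic data in place of real reference-satellite imagery (the actual degradation model and reference sensor are not specified here).

```python
import numpy as np

def block_average(img: np.ndarray, factor: int) -> np.ndarray:
    """Degrade a high-resolution band by block-averaging, simulating a coarser sensor grid."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of the factor
    img = img[:h2, :w2]
    return img.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))

def make_pair(hi_res: np.ndarray, factor: int = 4):
    """Return an (input, target) training pair: the degraded band and the original band."""
    return block_average(hi_res, factor), hi_res

# Hypothetical 2.5 m reference band (synthetic stand-in for real reference-satellite data)
rng = np.random.default_rng(0)
band_2p5m = rng.random((256, 256)).astype(np.float32)
x_10m, y_2p5m = make_pair(band_2p5m, factor=4)  # 64x64 input paired with 256x256 target
```

A network trained on many such (x, y) pairs can then be applied to genuine Sentinel-2 10 m bands, which is the core of the strategy described in the abstract.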
Permanent crops, such as olive groves, vineyards and fruit trees, are important in European agriculture because of their spatial and economic relevance. Agricultural geographical databases (AGDBs) are commonly used by public bodies to gain knowledge of the extent covered by these crops and to manage related agricultural subsidies and inspections. However, the updating of these databases is mostly based on photointerpretation, and thus keeping this information up to date is very costly in terms of time and money. This paper describes a methodology for the automatic detection of uprooted orchards (parcels where fruit trees have been eliminated) based on the textural classification of orthophotos with a spatial resolution of 0.25 m. The textural features used for this classification were derived from the grey level co-occurrence matrix (GLCM) and the wavelet transform, and were selected through principal component analysis (PCA) and separability analyses. Next, a discriminant analysis classification algorithm was used to detect uprooted orchards. Entropy, contrast and correlation were found to be the most informative textural features obtained from the co-occurrence matrix. The minimum and standard deviation in plane 3 were the features selected from the wavelet transform. The classification based on these features achieved a true positive rate (TPR) of over 80% and an accuracy (A) of over 88%. As a result, this methodology reduced the number of fields requiring photointerpretation by 60-85%, depending on the membership threshold selected. The proposed approach could be easily adopted by different stakeholders and could significantly increase the efficiency of agricultural database updating tasks.
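The GLCM features highlighted above (entropy, contrast and correlation) can be computed as in the following sketch. This is a generic, pure-NumPy illustration of the standard GLCM definitions, not the exact configuration (offsets, quantisation levels, window size) used in the study.

```python
import numpy as np

def glcm(img: np.ndarray, dx: int = 1, dy: int = 0, levels: int = 8) -> np.ndarray:
    """Normalised, symmetric grey level co-occurrence matrix for one offset (dx, dy >= 0)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantise a [0, 1] image
    h, w = q.shape
    a = q[:h - dy, :w - dx].ravel()   # reference pixels
    b = q[dy:, dx:].ravel()           # neighbour pixels at the chosen offset
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)           # count co-occurrences
    m += m.T                          # make the matrix symmetric
    return m / m.sum()

def glcm_features(p: np.ndarray) -> dict:
    """Entropy, contrast and correlation from a normalised GLCM p."""
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),
        "contrast": ((i - j) ** 2 * p).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j),
    }

features = glcm_features(glcm(np.random.default_rng(1).random((32, 32))))
```

In practice such features would be computed per parcel (or per window) and fed to the classifier; libraries such as scikit-image provide equivalent, more complete implementations.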
Abstract. The Copernicus programme, via its Sentinel missions, is making Earth observation more accessible and affordable for everybody. Sentinel-2 images provide multi-spectral information for each location every 5 days. However, the maximum spatial resolution of its bands is 10 m, for the RGB and near-infrared bands. Increasing the spatial resolution of Sentinel-2 images without additional costs would make any subsequent analysis more accurate. Most approaches to super-resolution for Sentinel-2 have focused on obtaining 10 m resolution images for the bands at lower resolutions (20 m and 60 m), taking advantage of the information provided by the bands at finer resolution (10 m). In contrast, our focus is on increasing the resolution of the 10 m bands, that is, super-resolving the 10 m bands to 2.5 m resolution, where no additional information is available. This problem is known as single-image super-resolution, and deep learning-based approaches have become the state of the art for this problem on standard images. However, models learned for standard images do not translate well to satellite images. Hence, the problem is how to train a deep learning model for super-resolving Sentinel-2 images when no ground truth exists (Sentinel-2 images at 2.5 m). We propose a methodology for learning convolutional neural networks for Sentinel-2 image super-resolution that makes use of images from other sensors with high similarity to Sentinel-2 in terms of spectral bands but greater spatial resolution. Our proposal is tested with a state-of-the-art neural network, showing that it can be useful for learning to increase the spatial resolution of the RGB and near-infrared bands of Sentinel-2.
In the framework of the Copernicus Emergency Management Service (EMS) Mapping Validation, the applicability of the MultiTemporal Coherence (MTC) technique for the detection and delineation of burnt areas was tested using Sentinel-1 data and the software made available by the European Space Agency (ESA), the Sentinel Application Platform (SNAP). The main purpose of the study was to test a methodology that would benefit from the advantages of delineating burnt areas based on radar data rather than optical data, namely its capacity to acquire data both night and day and to avoid the interference of clouds and/or smoke. Moreover, the study aimed to achieve the delineation of burnt areas using Sentinel-1 and SNAP in the frame of emergency mapping, where processing time is constrained by the need to give a quick response to the emergency. Four Sentinel-1 images were acquired over a mountainous area mainly covered by Mediterranean vegetation that suffered massive forest fires in the summer of 2016. The burnt area delineation was obtained by an object-based image analysis (OBIA) of the resulting MTC image, followed by a visual inspection. The effects of the polarization, the acquisition mode, and the incidence angle of the synthetic aperture radar (SAR) imagery were studied in order to assess the contribution of these sensor variables to the results. Results of the Sentinel-1-based delineation were compared to those obtained using optical imagery, which is traditionally used for this application. To this end, the derived fire delineation was compared to that derived from three optical images: pre- and post-event Sentinel-2 images and a post-event SPOT 6 image. The first two were used to calculate the difference of the burnt area index (dBAI), from which the burnt area delineation was derived by OBIA and photointerpretation with the help of the SPOT 6 image.
Results of the comparison showed the feasibility of using the MTC technique for burnt area delineation, as high overall accuracy values were observed when compared to the burnt area delineation derived from optical imagery. The importance of the incidence angle of the Sentinel-1 images was assessed as well, with lower angles resulting in higher overall accuracies. In addition, the availability of dual polarization in the Sentinel-1 images allowed us to give recommendations regarding which polarization gave the best results. The potential of SAR data, which obtained results equivalent to those from optical imagery, is significant in an emergency context, given that radar sensors acquire images continuously and in all weather conditions.
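The dBAI comparison mentioned above can be illustrated as follows, assuming the standard Burnt Area Index formulation BAI = 1 / ((0.1 - RED)^2 + (0.06 - NIR)^2) over reflectance values; the thresholds, OBIA segmentation and photointerpretation steps of the actual study are not reproduced, and the sample reflectances are illustrative only.

```python
import numpy as np

def bai(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Burnt Area Index: distance in reflectance space to the burnt convergence point (0.1, 0.06)."""
    return 1.0 / ((0.1 - red) ** 2 + (0.06 - nir) ** 2)

def dbai(red_pre, nir_pre, red_post, nir_post) -> np.ndarray:
    """Post- minus pre-event BAI; high positive values suggest burnt pixels."""
    return bai(red_post, nir_post) - bai(red_pre, nir_pre)

# Illustrative per-pixel reflectances: vegetated before the fire, charred after
red_pre, nir_pre = np.array([0.05]), np.array([0.40])
red_post, nir_post = np.array([0.12]), np.array([0.08])
change = dbai(red_pre, nir_pre, red_post, nir_post)
```

Burning moves a pixel's reflectance towards the convergence point, so its BAI rises sharply between the pre- and post-event images, which is what the dBAI map exploits before segmentation.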
This work is framed within the effort to ensure the balance between water availability and demand in the Community of Madrid in the context of urban growth. The paper presents the development of an operational methodology to define and create a cartographic database of the Community of Madrid based on SPOT5 satellite imagery and its periodic updates, with QuickBird images subsequently used for validation. This database reflects the evolution of urbanized areas resulting from the consolidation of the municipalities' urban development plans, as well as the state and evolution of green urban areas. The resulting cartographic database has been integrated into the Canal de Isabel II's Geographical Information System and is intended to be the reference information for developing and updating the strategic infrastructure plans that must anticipate the future water demands caused by urban expansion in the Community of Madrid.