Mapping forest types and tree species at regional scales to provide information for ecologists and forest managers is a new challenge for the remote sensing community. Here, we assess the potential of U-net, a recent deep learning convolutional network, to identify and segment (1) natural forests and eucalyptus plantations, and (2) an indicator of forest disturbance, the tree species Cecropia hololeuca, in very high resolution (0.3 m) images from the WorldView-3 satellite in the Brazilian Atlantic rainforest region. The networks for forest types and Cecropia trees were trained with 7611 and 1568 red-green-blue (RGB) images, respectively, together with their dense labeled masks. Eighty per cent of the images were used for training and 20% for validation. The U-net network segmented forest types with an overall accuracy above 95% and an intersection over union (IoU) of 0.96; for C. hololeuca, the overall accuracy was 97% and the IoU was 0.86. The predictions were produced over a 1600 km2 region using the WorldView-3 RGB bands pan-sharpened to 0.3 m. Natural and eucalyptus forests compose 79% and 21%, respectively, of the region's total forest cover (82 250 ha), and Cecropia crowns covered 1% of the natural forest canopy. An index describing the level of disturbance of the natural forest fragments, based on the spatial distribution of Cecropia trees, was developed. Our work demonstrates how a deep learning algorithm can support applications such as vegetation, tree species distribution and disturbance mapping at a regional scale.
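The IoU figures reported above are the standard region-overlap metric for segmentation masks. As a minimal sketch (not the authors' code), the per-class computation on binary NumPy masks looks like this:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:               # both masks empty: define as perfect match
        return 1.0
    return np.logical_and(pred, target).sum() / union

# Toy example: predicted mask covers the first two columns,
# reference mask only the first, so IoU = 4 / 8 = 0.5.
pred = np.array([[1, 1, 0, 0]] * 4)
ref  = np.array([[1, 0, 0, 0]] * 4)
print(iou(pred, ref))  # 0.5
```

For a multi-class product such as the forest-type map, the same function is applied per class and the scores are averaged.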
The present work is the first of a two-part space weather study of the ionospheric Total Electron Content (TEC), based on data collected by four ground-based Global Navigation Satellite System (GNSS) networks that together cover the whole of Latin America, from Patagonia to northern Mexico. To the best of our knowledge, the maps presented here are the first TEC maps derived from ground-based data covering the entire Latin American region, which represents an advance in space weather monitoring and forecasting of the ionosphere. This work provides a qualitative and quantitative daytime analysis of ionospheric TEC variation, encompassing: (a) the response of TEC to the solar flux at midday; (b) the seasonal variation of TEC in different latitudinal ranges; and (c) the North-South asymmetry of TEC over Latin America. The response to solar flux is based on day-to-day TEC variations during two periods with different solar activity conditions: 2011 (ascending phase) and 2014 (maximum). Approximations of the meridional wind component derived from the Horizontal Wind Model-14 and the hmF2 obtained from the International Reference Ionosphere model were used. Equinoctial asymmetries with opposite configurations under high and moderate solar activity were identified in the TEC variation. For 2011, the asymmetry was related to the change in solar flux; in 2014, however, the hmF2 variation indicates that the influence of the neutral wind becomes dominant. Among the results, we highlight the absence of a winter anomaly in the Northern Hemisphere in 2014 and a stronger annual anomaly at latitudes below −20°.
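The abstract does not detail the TEC estimation chain, but slant TEC from dual-frequency GNSS data is conventionally obtained from the geometry-free combination of the two carriers. A minimal sketch of that textbook relation (GPS L1/L2 frequencies assumed; satellite and receiver biases, cycle slips and carrier-phase leveling are deliberately ignored here):

```python
F1 = 1575.42e6   # GPS L1 carrier frequency, Hz
F2 = 1227.60e6   # GPS L2 carrier frequency, Hz
K = 40.3         # ionospheric refraction constant, m^3 s^-2

def slant_tec(p1_m: float, p2_m: float) -> float:
    """Slant TEC in TEC units (1 TECU = 1e16 electrons/m^2) from
    dual-frequency pseudoranges p1, p2 in meters."""
    stec = (F1**2 * F2**2) / (K * (F1**2 - F2**2)) * (p2_m - p1_m)
    return stec / 1e16

# A 5.2 m differential delay corresponds to roughly 50 TECU.
print(slant_tec(20_000_000.0, 20_000_005.2))
```

Turning such slant values into the vertical TEC shown on regional maps additionally requires an obliquity (mapping) function and bias estimation, which a full processing chain would handle.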
Urban environments exhibit extremely high spectral and spatial variability, with a huge range of shapes and sizes, and applications involving their study demand high resolution images. Because these environments keep growing over time, applications related to their monitoring increasingly turn to autonomous intelligent systems, which together with remote sensing data can assist with, or even predict, everyday situations. The task of mapping cities by autonomous operators has usually been carried out with aerial optical images because of their scale and resolution; new scientific questions, however, have pushed research into an era of highly detailed data extraction. For many years, artificial neural models were commonly used to solve complex problems such as automatic image classification, owing much of their popularity to their ability to adapt to complex situations without human intervention. Even so, their popularity declined in the mid-2000s, mostly because of the complexity and time-consuming nature of their methods and workflows. Newer neural network architectures have since revived interest in autonomous classifiers, especially for image classification. Convolutional Neural Networks (CNNs) have become a trend for pixel-wise image segmentation, showing flexibility when detecting and classifying many kinds of objects, even in situations where humans fail to perceive differences, such as in city scenes. In this paper, we explore and experiment with state-of-the-art technologies to semantically label 3D urban models over complex scenarios. To achieve this, we split the problem into two main processing lines: first, labeling façade features in the 2D domain, where a supervised CNN segments ground-based façade images into six feature classes (roof, window, wall, door, balcony and shop); second, a Structure-from-Motion (SfM) and Multi-View Stereo (MVS) workflow extracts the geometry of the façade, and the images segmented in the first stage are used to label the generated mesh by a "reverse" ray-tracing technique. We demonstrate that the proposed methodology is robust in complex scenarios: façade feature inference reached up to 93% accuracy over most of the datasets used. Although the approach still struggles with unknown architectural styles and the 3D labeling can be improved, we present a consistent and simple methodology to handle the problem.
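The "reverse" ray-tracing step can be pictured as projecting each mesh element into every segmented view that sees it and voting on the class. A minimal sketch, assuming pinhole cameras with known intrinsics K and pose (R, t) from the SfM stage; all names are hypothetical, and occlusion testing, a key part of the real technique, is omitted:

```python
import numpy as np
from collections import Counter

def project(points_w: np.ndarray, K, R, t) -> np.ndarray:
    """Project Nx3 world points into pixel coordinates (pinhole model)."""
    cam = (R @ points_w.T + t.reshape(3, 1)).T   # world -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                # perspective divide

def label_face(centroid: np.ndarray, views) -> int:
    """Majority-vote a mesh face's class from the segmented images.
    views: list of (K, R, t, seg) where seg is an HxW class-id image."""
    votes = []
    for K, R, t, seg in views:
        u, v = project(centroid[None, :], K, R, t)[0]
        u, v = int(round(u)), int(round(v))
        if 0 <= v < seg.shape[0] and 0 <= u < seg.shape[1]:
            votes.append(int(seg[v, u]))
    return Counter(votes).most_common(1)[0][0] if votes else -1  # -1: unseen
```

A production version would also cast rays against the mesh to discard occluded views before voting.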
Pixel-by-pixel classifiers perform poorly on high spatial and radiometric resolution imagery of urban areas, mostly because of the similarity between the spectral responses of targets such as ceramic roofs and bare soil. For this reason, the literature favors object-oriented analysis for image interpretation: these approaches make better use of the high spatial resolution and do not rely on the target's spectral response alone. Assuming that object-oriented analysis is a suitable approach for intra-urban image classification, this paper assesses its results through an implementation over an urbanized area of the city of Campinas (Brazil) of roughly twelve square kilometers. Using the fusion of a high spatial resolution WorldView-2 image with its panchromatic band, the experiments were performed with eCognition Developer 8 as the segmentation platform, and the classification was based on a decision tree generated by the J48 (C4.5) algorithm in the WEKA software. This work also assesses which attribute set best suits the experiment, with an optimal attribute selection achieved through a wrapper method, yielding a final kappa statistic of 0.9425.
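WEKA's J48 is an implementation of C4.5. A close scikit-learn analogue of the classification stage (an illustrative substitute, not the authors' actual WEKA/eCognition workflow) is a decision tree with an entropy split criterion plus a wrapper-style sequential attribute selector:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

def classify_segments(X, y, n_attributes=10):
    """X: per-segment attributes (spectral, shape, texture); y: classes.
    n_attributes is a placeholder; the paper's wrapper chose its own subset."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0, stratify=y)
    tree = DecisionTreeClassifier(criterion="entropy")   # C4.5-like splits
    # Wrapper selection: greedily keep attributes that improve the tree.
    selector = SequentialFeatureSelector(tree, n_features_to_select=n_attributes)
    selector.fit(X_tr, y_tr)
    tree.fit(selector.transform(X_tr), y_tr)
    y_hat = tree.predict(selector.transform(X_te))
    return cohen_kappa_score(y_te, y_hat)   # the paper reports kappa = 0.9425
```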
ABSTRACT: Research on computational methods for road extraction has increased considerably over the last two decades. The procedure is usually performed on imagery from optical or microwave (radar) sensors. Radar images offer advantages over optical ones: they allow scene acquisition regardless of atmospheric and illumination conditions and make it possible to survey regions where the terrain is hidden by the vegetation canopy, among others. Cartographic mapping based on these images is often accomplished manually, requiring considerable time and effort from the human interpreter. Maps for detecting new roads or updating the existing road network are among the most important cartographic products to date. Many current studies involve extracting roads by automatic or semi-automatic approaches, each presenting different solutions for different problems, so the task remains an open scientific issue. One preliminary step in road extraction is the seeding of points belonging to roads, which can be done with different methods at various levels of automation. The identified seed points are interpolated to form the initial road network and then serve as input for the extraction method proper. The present work introduces an innovative hybrid method for extracting road centre-axes in an airborne synthetic aperture radar (SAR) image. First, candidate points are seeded fully automatically using Self-Organizing Maps (SOM), followed by a pruning process based on specific metrics; the centre-axes are then detected by an open-curve active contour model (snakes). The results obtained were evaluated for quality in terms of completeness, correctness and redundancy.
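The seeding stage rests on a self-organizing map: prototype vectors drift toward dense clusters of road-like pixels, and the trained prototypes serve as candidate seed points. A minimal NumPy sketch of a classic on-line SOM update loop, assuming each row of data is a feature vector for one candidate pixel (e.g., image coordinates plus backscatter intensity); the paper's actual SOM configuration and pruning metrics are not reproduced here:

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """On-line SOM: returns prototype vectors that settle on dense
    regions of the input feature space."""
    rng = np.random.default_rng(seed)
    w = rng.random((grid[0], grid[1], data.shape[1]))      # prototype grid
    gy, gx = np.mgrid[:grid[0], :grid[1]]
    for it in range(iters):
        x = data[rng.integers(len(data))]                  # random sample
        d = np.linalg.norm(w - x, axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)     # best-matching unit
        decay = 1.0 - it / iters                           # linear decay
        h = np.exp(-((gy - by)**2 + (gx - bx)**2) /
                   (2 * (sigma0 * decay)**2))              # neighbourhood kernel
        w += lr0 * decay * h[..., None] * (x - w)
    return w.reshape(-1, data.shape[1])   # prototypes ~ candidate road seeds
```

Prototypes that land off the road network are what a subsequent pruning step would discard before the snakes model traces the centre-axes.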
Abstract: The great advantage of using radar (Radio Detection and Ranging) images is the possibility of surveying areas frequently covered by clouds, since imaging by active sensors does not depend on the atmospheric conditions over the region of interest. Mapping from these images is often carried out manually, demanding considerable time and effort from the interpreter. This article addresses the use of Self-Organizing Maps (SOM) as a method for identifying points in images, each identified point representing an element belonging to a road. To qualify the results, they were evaluated with a performance measure specific to road extraction, yielding correctness, completeness and quality indices, the last of which is essential to the performance of certain road extractors in digital images. Keywords: radar images, computer vision, self-organizing maps, road extraction.
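The correctness, completeness and quality indices mentioned here are the standard road-extraction quality measures, computed from matched lengths of the extracted and reference networks. A minimal sketch, assuming the true-positive/false-positive/false-negative lengths have already been obtained by buffer matching against the reference:

```python
def completeness(tp_len: float, fn_len: float) -> float:
    """Share of the reference road network that was actually extracted."""
    return tp_len / (tp_len + fn_len)

def correctness(tp_len: float, fp_len: float) -> float:
    """Share of the extracted network that matches the reference."""
    return tp_len / (tp_len + fp_len)

def quality(tp_len: float, fp_len: float, fn_len: float) -> float:
    """Combined index TP / (TP + FP + FN), the strictest of the three."""
    return tp_len / (tp_len + fp_len + fn_len)
```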