Flood maps alone are not sufficient to assess the risks to people, property, infrastructure, and services from a flood event. Simply put, the risk is close to zero if the flooded region is "empty" (i.e., unpopulated, with no property, industry, infrastructure, or socio-economic activity). High spatial resolution Earth Observation (EO) data can contribute to the generation and updating of flood risk maps that account for population, economic development, and critical infrastructure, which can enhance a city's flood mitigation and preparedness planning. In this case study of the Don River watershed, Toronto, flood risk is determined and flood risk index maps are generated by implementing a methodology that estimates risk from the geographic coverage of the flood hazard, the vulnerability of people, and the exposure of large building structures to flood water. Specifically, the spatial flood risk index maps are generated through analytical spatial modeling that takes into account the areas in which a flood hazard is expected to occur, the terrain's morphological characteristics, socio-economic parameters derived from demographic data, and the density of large building complexes. The generated flood risk maps are verified through visual inspection against 3D city flood maps. Findings illustrate that areas of higher flood risk coincide with areas of high flood hazard, high social vulnerability, and high building exposure.
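The abstract describes risk as a spatial combination of hazard extent, social vulnerability, and building exposure. A minimal sketch of such a raster overlay is shown below; the layer values, the multiplicative combination rule, and the index bin edges are all illustrative assumptions, not the paper's actual model:

```python
import numpy as np

# Hypothetical normalized raster layers over the study area (0..1 each).
# In practice these would come from flood hazard maps, demographic data,
# and building-footprint density, resampled to a common grid.
hazard = np.array([[0.9, 0.2], [0.5, 0.0]])         # flood hazard extent/depth
vulnerability = np.array([[0.8, 0.4], [0.3, 0.1]])  # socio-economic vulnerability
exposure = np.array([[0.7, 0.1], [0.6, 0.0]])       # large-building density

# One common convention: risk = hazard * vulnerability * exposure,
# then binned into a discrete index (0=low, 1=medium, 2=high).
risk = hazard * vulnerability * exposure
index = np.digitize(risk, bins=[0.05, 0.25])

print(index)  # per-cell flood risk index
```

Cells that score high on all three layers end up in the highest index class, matching the abstract's finding that high risk coincides with high hazard and high vulnerability/exposure.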
<p><strong>Abstract.</strong> Tree species classification at the individual tree level is a challenging problem in forest management. Deep learning, a cutting-edge technology that evolved from Artificial Intelligence, has been shown to outperform other techniques on complex problems such as image classification. In this work, we present a novel method that uses Residual Neural Networks to classify forest tree species from high-resolution RGB images acquired with a simple consumer-grade camera mounted on a UAV platform. We trained the neural network on UAV RGB images acquired over three years that varied in numerous acquisition parameters, such as season, time, illumination, and angle. As a first step, we experimented with limited data to distinguish two pine species, red pine and white pine, from the remaining species. We performed two experiments: the first with images from all three acquisition years, and the second with images from only one acquisition year. In the first experiment we obtained 80% classification accuracy when the trained network was tested on a distinct set of images; in the second experiment we obtained 51% classification accuracy. As part of this work, a novel dataset of high-resolution labelled tree species was generated that can be used for further studies involving deep neural networks in forestry.</p>
Unmanned aerial vehicles (UAV) are being used for low-altitude remote sensing and thematic land classification with visible-light and multispectral sensors. The objective of this work was to investigate the use of a UAV equipped with a compact spectrometer for land cover classification. The UAV platform was a DJI Flamewheel F550 hexacopter equipped with GPS and Inertial Measurement Unit (IMU) navigation sensors, and a Raspberry Pi processor and camera module. The spectrometer was the FLAME-NIR, a near-infrared spectrometer for hyperspectral measurements. RGB images and spectrometer data were captured simultaneously. Because spectrometer data do not provide continuous terrain coverage, the locations of their ground elliptical footprints were determined from the bundle adjustment solution of the captured images. For each spectrometer ground ellipse, the land cover signature at the footprint location was determined to enable the characterization, identification, and classification of land cover elements. To obtain a continuous land cover classification map, spatial interpolation was carried out from the irregularly distributed labeled spectrometer points. Classification accuracy was assessed by spatial intersection with an object-based image classification performed on the RGB images. Results show that for homogeneous land cover, such as water, the classification accuracy is 78%, while for mixed classes, such as grass, trees, and manmade features, the average accuracy is 50%, indicating the contribution of hyperspectral measurements from low-altitude UAV-borne spectrometers to improved land cover classification.
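The step from irregularly distributed labeled footprint points to a continuous classification map can be done with nearest-neighbor interpolation. A small sketch of that idea (the point coordinates, class ids, and grid extent are invented for illustration; the paper's actual interpolation method is not specified in the abstract):

```python
import numpy as np

# Hypothetical labeled spectrometer footprint centers (x, y) with class ids,
# e.g. 0=water, 1=grass, 2=trees. Real coordinates would come from the
# bundle adjustment solution of the concurrent RGB images.
points = np.array([[1.0, 1.0], [4.0, 1.0], [2.5, 4.0]])
labels = np.array([0, 1, 2])

# Nearest-neighbor interpolation onto a regular 5x5 grid: each grid cell
# takes the class of the closest labeled footprint point.
xs, ys = np.meshgrid(np.arange(5), np.arange(5))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
dist = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
class_map = labels[np.argmin(dist, axis=1)].reshape(xs.shape)

print(class_map)  # continuous land cover map, indexed [y, x]
```

The resulting map can then be intersected cell-by-cell with the object-based RGB classification to compute the per-class agreement the abstract reports.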
Tree species identification at the individual tree level is crucial for forest operations and management, yet its automated mapping remains challenging. Emerging technology, such as the high-resolution imagery from unmanned aerial vehicles (UAV) that is now becoming part of every forester’s surveillance kit, can potentially provide a solution to better characterize the tree canopy. To address this need, we have developed an approach based on a deep Convolutional Neural Network (CNN) to classify forest tree species at the individual tree level, using high-resolution RGB images acquired from a consumer-grade camera mounted on a UAV platform. This work explores the ability of the Dense Convolutional Network (DenseNet) to classify economically important coniferous tree species commonly found in eastern Canada. The network was trained using multi-temporal images captured under varying acquisition parameters to include seasonal, temporal, illumination, and angular variability. Validation of this model using distinct images over a mixedwood forest in Ontario, Canada, showed over 84% classification accuracy in distinguishing five predominant species of coniferous trees. The model remains highly robust even when using images taken during different seasons and times, and with varying illumination and angles.
ABSTRACT: Small fixed-wing and rotor-copter unmanned aerial vehicles (UAV) are being used for low-altitude remote sensing in thematic land classification and precision agriculture applications. Various sensors operating in the non-visible spectrum, such as multispectral, hyperspectral, and thermal sensors, can be used as payloads. This work presents a preliminary study on the use of an unmanned aerial vehicle equipped with a compact spectrometer for land cover type characterization. When calibrated, the spectra measured by the UAV spectrometer can be processed and compared with reference data to generate georeferenced reflection spectra, enabling the identification, classification, and characterization of land cover elements. For this case study we used a DJI Flamewheel F550 hexacopter and the FLAME-NIR spectrometer for hyperspectral measurements. The calibration of the spectrometer is described, as well as the approach to determine its spatial footprint. The ground points labeled with spectrometer spectral exposures can then be used for land cover classification. Preliminary results of a case study are presented.
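The spectrometer's ground footprint depends on flying height, field of view, and pointing angle: at nadir it is circular, and an off-nadir tilt stretches it into an approximate ellipse. A small geometry sketch follows; the height, field-of-view, and tilt values are hypothetical (the FLAME-NIR's effective FOV depends on the fitted fore-optic), and only the standard footprint formulas are used:

```python
import math

# Hypothetical acquisition geometry.
h = 50.0                 # flying height above ground (m)
fov = math.radians(25)   # full field-of-view angle of the spectrometer optic
tilt = math.radians(10)  # off-nadir pointing angle

# Nadir footprint: circle of radius h * tan(fov/2).
r_nadir = h * math.tan(fov / 2)

# Tilted footprint: approximate ellipse. Cross-track semi-axis uses the
# slant range h / cos(tilt); along-track semi-axis is half the ground
# distance between the near and far edges of the beam.
semi_minor = h * math.tan(fov / 2) / math.cos(tilt)
semi_major = h * (math.tan(tilt + fov / 2) - math.tan(tilt - fov / 2)) / 2

print(r_nadir, semi_minor, semi_major)  # footprint grows with tilt
```

Centering this ellipse at each exposure's ground position (from the bundle adjustment of the concurrent RGB images) yields the labeled footprint points used for classification.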
In this paper, we describe an in-house developed software system for manmade object detection and geo-localization based on sidescan sonar images. The system covers detecting objects in a sidescan sonar image, outlining the detected objects in a mosaic image formed by mosaicing several swathes of sonar images, and saving the detected objects in a geo-information database. The automatic object detection algorithm is based on the reflection strength (highlights), shadows, and their properties. Information on detected objects, such as their estimated size, shape, and geo-locations, is retrieved by the analysis. The localized objects and ancillary information are subsequently loaded into a Geographical Information System (GIS). This enhances the value of the information, as the objects can then be viewed and queried in conjunction with other useful information, like bathymetry, currents, and seabed sediment types, in an interactive map-based display over the Internet. The object information saved in the GIS can serve as a reference for new objects or environmental change detection in future surveys.

I. INTRODUCTION

The development of AUV technologies and improved sonar imaging technology have enhanced the use of sidescan sonar (SSS) images in mine surveillance / detection efforts. The AUV for mine countermeasures has become a large research area, in which computer aided detection and computer aided classification (CAD/CAC) of objects from SSS images play an important role [1]-[4]. Such a system makes it possible to quickly and automatically detect manmade objects from SSS images, and is a key step towards the eventual capability of automatically ...

II. SYSTEM STRUCTURE

The system consists of modules for automatic object detection, image mosaicing, and outlining of detected objects on the mosaic image, as shown in Fig. 1. Two types of information are of interest in the CAD/CAC application; each represents a distinct theme for the GIS:
1. Outlines and attributes of detected sonar targets;
2. A geo-referenced image obtained from a mosaic of several sonar images.
Target detection and extraction of attributes are handled within the standalone CAD/CAC toolbox. Targets are represented as geo-referenced polygons with associated attributes to form a vector-based GIS theme. The attributes are stored in a relational database table, with one record for each target detected.

The raw sonar image files have to undergo a process to locate each sonar ping accurately in a geographical position, taking into account the navigation and altitude parameters of the tow-fish used to acquire the sonar image. Several geo-referenced sonar images, representing different contiguous survey swathes, can then be stitched together to form a ...
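The detection cue described above, a strong highlight followed by an acoustic shadow in the range direction, can be illustrated with simple thresholding. The toy intensity patch and thresholds below are invented for illustration and are not the paper's actual detector:

```python
import numpy as np

# Toy sidescan intensity patch (hypothetical values): a proud object returns
# a strong echo (highlight) followed by an acoustic shadow behind it.
patch = np.array([
    [10, 12, 11, 13, 12, 11],
    [11, 60, 62,  2,  1, 12],   # highlight (~60) then shadow (~1-2)
    [10, 58, 61,  1,  2, 11],
    [12, 11, 13, 12, 11, 10],
])

HIGH_T, LOW_T = 40, 5  # hypothetical intensity thresholds
highlight = patch > HIGH_T
shadow = patch < LOW_T

# Flag a range line when a shadow region follows a highlight region:
# the basic highlight/shadow pairing cue for a manmade object.
detections = [r for r in range(patch.shape[0])
              if highlight[r].any() and shadow[r].any()
              and np.argmax(shadow[r]) > np.argmax(highlight[r])]
print(detections)  # range lines containing a highlight/shadow pair
```

From flagged regions like these, attributes such as extent and shape can be measured and, after geo-referencing each ping, exported as the polygon features loaded into the GIS theme.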