NASA's Global Ecosystem Dynamics Investigation (GEDI) is a key climate mission whose goal is to advance our understanding of the role of forests in the global carbon cycle. While GEDI is the first space-based LiDAR explicitly optimized to measure the vertical forest structure predictive of aboveground biomass, accurately interpreting this vast amount of waveform data across the broad range of observational and environmental conditions is challenging. Here, we present a novel supervised machine learning approach to interpret GEDI waveforms and regress canopy top height globally. We propose a probabilistic deep learning approach based on an ensemble of deep convolutional neural networks (CNNs) that avoids explicitly modelling unknown effects such as atmospheric noise. The model learns to extract robust features that generalize to unseen geographical regions and, in addition, yields reliable estimates of predictive uncertainty. Ultimately, the global canopy top height estimates produced by our model have an expected RMSE of 2.7 m with low bias.
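The abstract leaves the architecture unspecified, but the setup it describes (independently trained CNNs that each predict a mean and a variance, combined into a mixture) matches the standard deep-ensemble recipe for probabilistic regression. The PyTorch sketch below is a minimal illustration under that assumption; all layer sizes, the waveform length, and names such as WaveformCNN are chosen for the example rather than taken from the paper.

```python
import torch
import torch.nn as nn

class WaveformCNN(nn.Module):
    """1D CNN mapping a GEDI-style waveform to a Gaussian over canopy top height."""
    def __init__(self, in_len=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 2)  # predicts mean and log-variance

    def forward(self, x):                       # x: (batch, 1, in_len)
        z = self.features(x).squeeze(-1)        # (batch, 64)
        mean, log_var = self.head(z).chunk(2, dim=-1)
        return mean.squeeze(-1), log_var.squeeze(-1)

def gaussian_nll(mean, log_var, target):
    """Heteroscedastic Gaussian negative log-likelihood training objective."""
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()

def ensemble_predict(models, x):
    """Combine an ensemble's per-model Gaussians into one predictive mean/variance."""
    means, variances = [], []
    for m in models:
        mu, log_var = m(x)
        means.append(mu)
        variances.append(log_var.exp())
    means = torch.stack(means)                  # (n_models, batch)
    variances = torch.stack(variances)
    mu = means.mean(0)
    # Total uncertainty = average data noise + disagreement between models.
    var = variances.mean(0) + means.var(0, unbiased=False)
    return mu, var
```

Separating the averaged per-model variance (data noise) from the spread of the means (model disagreement) is what yields the kind of predictive uncertainty estimate the abstract refers to.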
Sentinel-2 multi-spectral images collected over periods of several months were used to estimate vegetation height for Gabon and Switzerland. A deep convolutional neural network (CNN) was trained to extract suitable spectral and textural features from reflectance images and to regress per-pixel vegetation height. In Gabon, reference heights for training and validation were derived from airborne LiDAR measurements. In Switzerland, reference heights were taken from an existing canopy height model derived via photogrammetric surface reconstruction. The resulting maps have a mean absolute error (MAE) of 1.7 m in Switzerland and 4.3 m in Gabon (a root mean square error (RMSE) of 3.4 m and 5.6 m, respectively), and correctly estimate vegetation heights up to >50 m. They also show good qualitative agreement with existing vegetation height maps. Our work demonstrates that, given a moderate amount of reference data (i.e., 2,000 km² in Gabon and ≈5,800 km² in Switzerland), high-resolution vegetation height maps with 10 m ground sampling distance (GSD) can be derived at country scale from Sentinel-2 imagery.
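Per-pixel regression from multi-band imagery is commonly implemented as a fully convolutional network, and the described training setup (a loss computed only where reference heights exist) fits that pattern. The following sketch illustrates the idea; the band count, layer widths, and the HeightFCN name are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class HeightFCN(nn.Module):
    """Minimal fully convolutional net: Sentinel-2 bands in, per-pixel height out."""
    def __init__(self, n_bands=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),            # one height value per 10 m pixel
        )

    def forward(self, x):                   # x: (batch, n_bands, H, W)
        return self.net(x).squeeze(1)       # (batch, H, W) heights in metres

def masked_l1(pred, target, valid):
    """MAE restricted to pixels that have a LiDAR/photogrammetric reference height."""
    return (pred[valid] - target[valid]).abs().mean()
```

Masking the loss to valid reference pixels is what lets sparse airborne-LiDAR coverage supervise a dense, country-scale prediction.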
Up-to-date catalogs of the urban tree population are important for municipalities to monitor and improve quality of life in cities. Despite much research on the automation of tree mapping, mostly relying on dedicated airborne LiDAR or hyperspectral campaigns, tree detection and species recognition are still largely done manually in practice. We present a fully automated tree detection and species recognition pipeline that can process thousands of trees within a few hours using publicly available aerial and street view images of Google Maps™. These data provide rich information from different viewpoints and at different scales, from global tree shapes to bark textures. Our workflow is built around a supervised classification that automatically learns the most discriminative features from thousands of trees and corresponding, publicly available tree inventory data. In addition, we introduce a change tracker that recognizes changes of individual trees at city scale, which is essential to keep an urban tree inventory up to date. The system takes street-level images of the same tree location at two different times and classifies the type of change (e.g., tree has been removed). Drawing on recent advances in computer vision and machine learning, we apply convolutional neural networks (CNNs) for all classification tasks. We propose the following pipeline: download all available panoramas and overhead images of an area of interest; detect trees per image and combine multi-view detections in a probabilistic framework, adding prior knowledge; and recognize the fine-grained species of the detected trees. In a later, separate module, track trees over time, detect significant changes and classify the type of change. We believe this is the first work to exploit publicly available image data for city-scale street tree detection, species recognition and change tracking, exhaustively over several square kilometers, i.e., many thousands of trees. Experiments in the city of Pasadena, California, USA show that we can detect >70% of the street trees, assign the correct species to >80% of them across 40 different species, and correctly detect and classify changes in >90% of the cases.
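The abstract does not detail its probabilistic multi-view fusion. One simple stand-in is naive-Bayes log-odds fusion of per-view detector scores with a per-location tree prior, sketched below; the conditional-independence assumption and the fuse_detections interface are assumptions for illustration, and the paper's framework also incorporates spatial prior knowledge not modelled here.

```python
import numpy as np

def fuse_detections(view_probs, prior=0.01):
    """Naive-Bayes fusion of per-view tree probabilities at one map location.

    view_probs: probabilities p(tree | image_k) from each street-view or aerial
    detector observing the location; prior: prior probability of a tree there.
    Assumes views are conditionally independent given the true label.
    """
    log_prior_odds = np.log(prior / (1 - prior))
    log_odds = log_prior_odds
    for p in view_probs:
        p = np.clip(p, 1e-6, 1 - 1e-6)
        # Each view contributes its likelihood ratio: posterior odds / prior odds.
        log_odds += np.log(p / (1 - p)) - log_prior_odds
    return 1 / (1 + np.exp(-log_odds))   # fused posterior p(tree | all views)

# Example: three views that individually give weak evidence jointly give a
# confident detection despite the low prior.
print(fuse_detections([0.8, 0.6, 0.9]))
```

Accumulating evidence in log-odds space is numerically stable and makes it easy to see how a low prior is overcome only when several independent views agree.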
In this paper, we propose to estimate tree defoliation from ground-level RGB photos with convolutional neural networks (CNNs). Tree defoliation is usually assessed with field campaigns, where experts estimate multiple tree health indicators per sample site. Campaigns span entire countries to produce a holistic, nation-wide picture of forest health. These surveys are laborious, expensive, time-consuming and require a large number of experts. We aim to make the monitoring process more efficient by casting tree defoliation estimation as an image interpretation problem. What makes this task challenging is the strong variance in lighting, viewpoint, scale, tree species, and defoliation types. Instead of accounting for each factor separately through explicit modelling, we learn a joint distribution directly from a large set of annotated training images following the end-to-end learning paradigm of deep learning. The proposed workflow is as follows: (i) human experts visit individual trees in forests distributed all over Switzerland, (ii) acquire one photo per tree with an off-the-shelf, hand-held RGB camera and (iii) assign a defoliation value. The CNN is then (iv) trained on a subset of the images with expert defoliation assessments and (v) tested on a hold-out part to check predicted values against ground truth. We evaluate our supervised method on three data sets of different levels of difficulty acquired in Swiss forests and achieve an average mean absolute error (avgMAE) of 7.6% on the joint data set after cross-validation. A comparison with a group of human experts on one of the data sets shows that our CNN approach performs only 0.9 percentage points worse. We show that tree defoliation estimation from ground-level RGB images with a CNN works well and achieves performance close to that of human experts.
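Casting defoliation estimation as end-to-end image regression typically amounts to fine-tuning a pretrained backbone with a single scalar output. The paper does not name its architecture, so the ResNet-50 below is an assumption, as is the L1 training objective (chosen here simply to match the MAE metric the abstract reports).

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune an ImageNet-pretrained backbone to regress defoliation in percent.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single regression output

def train_step(model, images, defoliation, optimizer):
    """One gradient update with an L1 objective on expert defoliation labels."""
    model.train()
    optimizer.zero_grad()
    pred = model(images).squeeze(1)        # (batch,) predicted defoliation %
    loss = nn.functional.l1_loss(pred, defoliation)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Starting from ImageNet weights is a common way to cope with the limited number of expert-labelled tree photos, since the backbone's low-level features transfer across image domains.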