Interest in drone solutions for forestry applications is growing. Using drones, datasets can be captured flexibly and at high spatial and temporal resolutions when needed. In forestry, fundamental tasks include the detection of individual trees, tree species classification, and biomass estimation. Deep neural networks (DNNs) have shown superior results compared with conventional machine learning methods such as the multi-layer perceptron (MLP) on large input datasets. The objective of this research was to investigate 3D convolutional neural networks (3D-CNNs) for classifying three major tree species in a boreal forest: pine, spruce, and birch. The proposed 3D-CNN models were employed to classify tree species at a test site in Finland. The classifiers were trained with a dataset of 3039 manually labelled trees, and their accuracies were then assessed on an independent dataset of 803 records. To find the most efficient feature combination, we compared the performance of 3D-CNN models trained with hyperspectral (HS) channels, Red-Green-Blue (RGB) channels, and a canopy height model (CHM), separately and combined. The proposed 3D-CNN model with RGB and HS layers produced the highest classification accuracy. The producer accuracies of the best 3D-CNN classifier on the test dataset were 99.6%, 94.8%, and 97.4% for pines, spruces, and birches, respectively. The best 3D-CNN classifier produced ~5% better classification accuracy than the MLP with all layers. Our results suggest that the proposed method provides excellent classification results with acceptable performance metrics for HS datasets. The pine class was detectable in most layers, spruce was most detectable in the RGB data, and birch was most detectable in the HS layers. Furthermore, the RGB datasets alone provide acceptable results for applications with lower accuracy requirements.
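The per-class producer accuracies quoted above are derived from a confusion matrix: for each species, the fraction of reference trees of that species that the classifier labelled correctly. A minimal sketch in pure Python, with illustrative counts only (not the study's actual data):

```python
def producer_accuracy(confusion, classes):
    """Producer accuracy per class: correctly classified reference samples
    divided by the total reference samples of that class.
    Rows of `confusion` are reference classes, columns are predictions."""
    accs = {}
    for i, cls in enumerate(classes):
        ref_total = sum(confusion[i])
        accs[cls] = confusion[i][i] / ref_total if ref_total else 0.0
    return accs

# Illustrative counts only (hypothetical, not the paper's data):
cm = [[50, 0, 0],   # pine references
      [2, 45, 3],   # spruce references
      [1, 1, 48]]   # birch references
print(producer_accuracy(cm, ["pine", "spruce", "birch"]))
```

Producer accuracy complements user accuracy (the column-wise analogue); reporting it per species, as the abstract does, exposes which classes the model tends to omit.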
ABSTRACT: Lightweight 2D-format hyperspectral imagers operable from unmanned aerial vehicles (UAVs) have become common in various remote sensing tasks in recent years. Using these technologies, the area of interest is covered by multiple overlapping hypercubes (in other words, multiview hyperspectral photogrammetric imagery), and each object point appears in many, even tens of, individual hypercubes. The common practice is to calculate hyperspectral orthomosaics utilizing only the most nadir areas of the images. However, the redundancy of the data offers potential for much more versatile and thorough feature extraction. We investigated various options for extracting spectral features in a grass sward quantity evaluation task. In addition to the various sets of spectral features, we used photogrammetry-based ultra-high-density point clouds to extract features describing the canopy 3D structure. A machine learning technique based on the Random Forest algorithm was used to estimate fresh biomass. Results showed high accuracies for all investigated feature sets. Estimation using the multiview data provided approximately 10% better results than the most nadir orthophotos. Utilization of the photogrammetric 3D features improved estimation accuracy by approximately 40% compared with approaches where only spectral features were applied. The best estimation RMSE of 239 kg/ha (6.0%) was obtained with the multiview anisotropy-corrected dataset and the 3D features.
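The error figure quoted above pairs an absolute RMSE (239 kg/ha) with a relative one (6.0%). A minimal sketch of both metrics, assuming the relative RMSE is expressed as a percentage of the mean reference biomass:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between reference and estimated values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def rmse_percent(y_true, y_pred):
    """RMSE as a percentage of the mean reference value."""
    mean_true = sum(y_true) / len(y_true)
    return 100.0 * rmse(y_true, y_pred) / mean_true

# Hypothetical fresh-biomass values in kg/ha, for illustration only:
reference = [3800.0, 4200.0, 4000.0]
estimated = [3900.0, 4100.0, 4050.0]
print(rmse(reference, estimated), rmse_percent(reference, estimated))
```

Reporting both forms, as the abstract does, lets readers compare accuracy across fields with very different absolute yield levels.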
Abstract. Information on grass quantity and quality is needed several times during a growing season to make optimal decisions about harvesting time and fertiliser rate, especially in northern countries, where grass sward quality declines and yield increases rapidly during the primary growth. We studied the potential of UAV-based photogrammetry and spectral imaging for grass quality and quantity estimation. To this end, a trial site with large variation in the quantity and quality parameters was established using different nitrogen fertilizer application rates and harvesting dates. UAV-based remote sensing datasets were captured four times during the primary growth season in June 2017, and agricultural reference measurements, including dry biomass and quality parameters such as digestibility (D-value), were collected simultaneously. The datasets were captured at a flying height of 50 m, which provided a ground sample distance (GSD) of 0.7 cm for the photogrammetric imagery and 5 cm for the hyperspectral imagery. A rigorous photogrammetric workflow was carried out for all datasets to determine the image exterior orientation parameters, camera interior orientation parameters, 3D point clouds, and orthomosaics. The quantitative radiometric calibration included sensor corrections, atmospheric correction, correction for the radiometric non-uniformities caused by illumination variations, BRDF correction, and the absolute reflectance transformation. Random forest (RF) and multilinear regression (MLR) estimators were trained using spectral bands, vegetation indices, and 3D features extracted from the remote sensing datasets, together with in situ reference measurements. From the FPI hyperspectral data, 35 spectral bands and 11 spectral indices were used. The 3D features were extracted from the canopy height model (CHM) generated from the RGB data.
The most accurate results were obtained on the second measurement day (15 June), which was close to the optimal harvesting time, and RF generally outperformed MLR slightly. When assessed with leave-one-out estimation, the best relative root mean squared error (RMSE%) for dry biomass was 8.9%, obtained using the 3D features. The best D-value estimation with the RF algorithm (RMSE% = 0.87%) was obtained using spectral features. Using these estimators, we then computed grass quality and quantity maps covering the entire test site to compare the different techniques and to evaluate the variability in the field. The results showed that low-cost drone remote sensing, when accurately calibrated, gave excellent precision for both biomass and quality parameter estimation, offering an efficient and accurate tool for managing silage grass production.
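The leave-one-out assessment mentioned above fits the estimator n times, each time holding out a single sample, predicting it with a model trained on the rest, and pooling the errors into one RMSE. A generic pure-Python sketch, where `fit` and `predict` are placeholders standing in for any estimator (such as the RF or MLR models used in the study):

```python
import math

def leave_one_out_rmse(xs, ys, fit, predict):
    """Leave-one-out cross-validation: train on all samples but one,
    predict the held-out sample, and accumulate squared errors."""
    sq_errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        model = fit(train_x, train_y)
        sq_errors.append((predict(model, xs[i]) - ys[i]) ** 2)
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Trivial stand-in estimator for illustration: predict the training-set mean.
fit_mean = lambda xs, ys: sum(ys) / len(ys)
predict_mean = lambda model, x: model
```

Leave-one-out is a natural choice here because field trials yield few reference plots; every sample serves as a test case exactly once, so no data is wasted on a fixed hold-out split.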