Timely and accurate information on the spatial distribution of urban trees is critical for sustainable urban development, management and planning. Compared with satellite-based remote sensing, Unmanned Aerial Vehicle (UAV) remote sensing has a higher spatial and temporal resolution, which provides a new method for the accurate identification of urban trees. In this study, we aim to establish an efficient and practical method for urban tree identification by combining an object-oriented approach and a random forest algorithm using UAV multispectral images. Firstly, the image was segmented by a multi-scale segmentation algorithm based on the scale determined by the Estimation of Scale Parameter 2 (ESP2) tool and visual discrimination. Secondly, spectral features, index features, texture features and geometric features were combined to form schemes S1–S8, and S9, consisting of features selected by the recursive feature elimination (RFE) method. Finally, the classification of urban trees was performed based on the nine schemes using the random forest (RF), support vector machine (SVM) and k-nearest neighbor (KNN) classifiers, respectively. The results show that the RF classifier performs better than SVM and KNN, and the RF achieves the highest accuracy in S9, with an overall accuracy (OA) of 91.89% and a Kappa coefficient (Kappa) of 0.91. This study reveals that geometric features have a negative impact on classification, and the other three types have a positive impact. The feature importance ranking map shows that spectral features are the most important type of features, followed by index features, texture features and geometric features. 
Most tree species achieve high classification accuracy, but the accuracy for camphor and Cinnamomum japonicum is much lower than for the other tree species, suggesting that the features selected in this study cannot accurately distinguish these two species; adding features such as tree height may therefore improve accuracy in future work. This study illustrates that the combination of an object-oriented approach and the RF classifier based on UAV multispectral images provides an efficient and powerful method for urban tree classification.
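The recursive feature elimination (RFE) step described above can be sketched in minimal form. The feature names and importance scores below are hypothetical stand-ins for the random-forest importances computed from the spectral, index, texture, and geometric features in the study:

```python
# Minimal sketch of recursive feature elimination (RFE): repeatedly drop
# the least-important feature until the desired number remains. The
# importance function is a stand-in; the study uses random-forest
# feature importances derived from the UAV image features.

def rfe(features, importance_fn, n_keep):
    """Iteratively remove the lowest-scoring feature until n_keep remain."""
    selected = list(features)
    while len(selected) > n_keep:
        scores = importance_fn(selected)
        worst = min(selected, key=lambda f: scores[f])
        selected.remove(worst)
    return selected

# Toy importances standing in for RF Gini importances (hypothetical values).
TOY_IMPORTANCE = {
    "ndvi": 0.30, "mean_green": 0.25, "glcm_entropy": 0.20,
    "mean_red": 0.15, "shape_index": 0.05, "compactness": 0.05,
}

def toy_importance(selected):
    return {f: TOY_IMPORTANCE[f] for f in selected}

best = rfe(list(TOY_IMPORTANCE), toy_importance, n_keep=3)
```

In practice the importances would be re-estimated by refitting the classifier after each elimination round, which is what makes the procedure "recursive" rather than a one-shot ranking.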
Olive trees, which are widely planted in China, are economically significant. Timely and accurate acquisition of olive tree crown information is vital for monitoring olive tree growth and accurately predicting fruit yield. The advent of unmanned aerial vehicles (UAVs) and deep learning (DL) provides an opportunity for rapidly monitoring parameters of the olive tree crown. In this study, we propose a method for automatically extracting olive crown information (the number and area of olive tree crowns), combining visible-light images captured by a consumer UAV with a new deep learning model, U2-Net, which has a deeply nested structure. Firstly, a data set of olive tree crown (OTC) images was constructed; it was further processed with the ESRGAN model to enhance image resolution and augmented (geometric and spectral transformations) to enlarge the data set and increase the generalization ability of the model. Secondly, four typical subareas (A–D) in the study area were selected to evaluate the performance of the U2-Net model in olive crown extraction under different scenarios, and the U2-Net model was compared with three current mainstream deep learning models for remote sensing image segmentation (i.e., HRNet, U-Net, and DeepLabv3+). The results showed that the U2-Net model achieved high accuracy in the extraction of tree crown numbers in the four subareas, with a mean intersection over union (IoU), overall accuracy (OA), and F1-Score of 92.27%, 95.19%, and 95.95%, respectively. Compared with the other three models, the IoU, OA, and F1-Score of the U2-Net model increased by 14.03–23.97, 7.57–12.85, and 8.15–14.78 percentage points, respectively.
In addition, the crown areas predicted by the U2-Net model were highly consistent with the measured areas; compared with the other three deep learning models, it had a lower error rate, with a root mean squared error (RMSE) of 4.78, a magnitude of relative error (MRE) of 14.27%, and a coefficient of determination (R2) higher than 0.93 in all four subareas, suggesting that the U2-Net model extracted crown profiles with the best integrity and was most consistent with the actual situation. This study indicates that combining UAV RGB images with the U2-Net model provides highly accurate and robust extraction of olive tree crowns and is helpful for the dynamic monitoring and management of orchard trees.
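Turning a binary crown mask (such as a thresholded U2-Net output) into crown count and area can be sketched as below. The 4-neighbour flood-fill labelling, the toy mask, and the ground sample distance (GSD) value are illustrative assumptions, not details from the study:

```python
# Sketch: derive crown count and total crown area from a 2-D 0/1 mask.
# Each connected component is treated as one crown; area is pixel count
# times the squared ground sample distance (metres per pixel, assumed).

def crown_stats(mask, gsd_m):
    """Return (crown_count, total_area_m2) from a 2-D 0/1 mask."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count, pixels = 0, 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new connected component
                stack = [(r, c)]
                seen[r][c] = True
                while stack:                    # iterative flood fill
                    y, x = stack.pop()
                    pixels += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count, pixels * gsd_m ** 2

# Toy mask with two crowns; 0.05 m/pixel GSD is an assumed example value.
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
count, area = crown_stats(mask, gsd_m=0.05)
```

A production pipeline would typically use a labelling routine from an image-processing library and may merge or split components, but the count-then-scale logic is the same.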
Accurate and timely information on the number of densely planted Chinese fir seedlings is essential for their scientific cultivation and intelligent management. However, in the later stage of cultivation, the overlapping of lateral branches among individuals is too severe to identify entire individuals in UAV images. At the same time, in high-density planting nurseries, the terminal bud of each seedling has the distinctive characteristic of growing upward, which can be used as an identification feature. However, due to the small size and dense distribution of the terminal buds, existing recognition algorithms produce significant errors. Therefore, in this study, we proposed a model based on an improved network structure of the latest YOLOv5 algorithm for identifying the terminal buds of Chinese fir seedlings. Firstly, a micro-scale prediction head was added to the original prediction heads to enhance the model's ability to perceive small terminal buds. Secondly, a multi-attention mechanism module composed of the Convolutional Block Attention Module (CBAM) and Efficient Channel Attention (ECA) was integrated into the neck of the network to further enhance the model's ability to focus on key target objects in complex backgrounds. Finally, data augmentation, Test-Time Augmentation (TTA), and Weighted Boxes Fusion (WBF) were used to improve the robustness and generalization of the model for identifying terminal buds in different growth states. The results showed that, compared with the standard version of YOLOv5, the recognition accuracy of the improved YOLOv5 increased significantly, with a precision of 95.55%, a recall of 95.84%, an F1-Score of 96.54%, and an mAP of 94.63%.
Under the same experimental conditions, compared with other current mainstream algorithms (YOLOv3, Faster R-CNN, and PP-YOLO), the average precision and F1-Score of the improved YOLOv5 also increased by 9.51–28.19 and 15.92–32.94 percentage points, respectively. Overall, the improved YOLOv5 algorithm integrated with the attention network can accurately identify the terminal buds of densely planted Chinese fir seedlings in UAV images and provide technical support for large-scale, automated counting and precision cultivation of Chinese fir seedlings.
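Detection metrics such as the precision, recall, and F1-Score reported above are conventionally computed by matching predicted boxes to ground-truth boxes via intersection over union (IoU). This minimal sketch assumes the common 0.5 IoU threshold and uses toy boxes; it is an illustration of the standard evaluation, not the study's exact protocol:

```python
# Sketch: greedy IoU matching of predicted boxes (x1, y1, x2, y2) to
# ground truth, then precision / recall / F1. The 0.5 IoU threshold is
# a common convention, assumed here.

def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_f1(preds, gts, thr=0.5):
    """Greedily match each prediction to its best unmatched ground truth."""
    matched, tp = set(), 0
    for p in preds:
        best = max(range(len(gts)), key=lambda i: iou(p, gts[i]),
                   default=None)
        if best is not None and best not in matched and iou(p, gts[best]) >= thr:
            matched.add(best)
            tp += 1                 # true positive: IoU above threshold
    prec = tp / len(preds) if preds else 0.0
    rec = tp / len(gts) if gts else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Toy example: one prediction overlaps a ground-truth box, one does not.
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]
p, r, f = detection_f1(preds, gts)
```

Note that because F1 is the harmonic mean of precision and recall, it always lies between them; evaluation code like this makes that relationship easy to verify.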