Abstract: With natural mangrove forests dwindling, mangrove reforestation projects are being conducted worldwide to prevent further losses. Because artificial mangroves are often planted as monocultures and suffer low survival rates, they need to be mapped and monitored dynamically. Remote sensing techniques have been widely used to map mangrove forests because they allow large-scale, accurate, efficient, and repeatable monitoring. This study evaluated the capability of 0.5-m Pléiades-1 imagery for classifying artificial mangrove species using both pixel-based and object-based classification schemes. For comparison, three machine learning algorithms, namely decision tree (DT), support vector machine (SVM), and random forest (RF), were used as classifiers in both the pixel-based and object-based procedures. The results showed that both approaches could discriminate between the four major artificial mangrove species, but the object-based method achieved a higher overall accuracy on average. For pixel-based image analysis, SVM produced the highest overall accuracy (79.63%); for object-based image analysis, RF achieved the highest overall accuracy (82.40%) and was also the best machine learning algorithm overall for classifying artificial mangroves. The patches produced by object-based image analysis presented a more generalized appearance and depicted mangrove species communities contiguously. When the same machine learning algorithms were compared by McNemar's test, a statistically significant difference in overall classification accuracy between the pixel-based and object-based classifications existed only for the RF algorithm. Regarding species, the monoculture and dominant mangrove species Sonneratia apetala group 1 (SA1), as well as the partly mixed, regularly shaped mangrove species Hibiscus tiliaceus (HT), could be identified well.
However, for the complex and easily confused mangrove species Sonneratia apetala group 2 (SA2) and other occasionally present mangrove species (OT), only the major distributions could be extracted, with an accuracy of about two-thirds. This study demonstrated that more than 80% of the artificial mangrove species distribution could be mapped.
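The McNemar's test mentioned above compares two classifiers evaluated on the same validation samples by looking only at the cases where they disagree. A minimal sketch, with hypothetical disagreement counts (the study's actual counts are not given in the abstract):

```python
import math

def mcnemar_test(n01: int, n10: int) -> tuple[float, float]:
    """McNemar's chi-square test with continuity correction.

    n01: samples correct under classifier A but wrong under B
    n10: samples wrong under A but correct under B
    Returns (chi2 statistic, p-value); df = 1.
    """
    chi2 = (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
    # For 1 degree of freedom, the chi-square survival function
    # reduces to erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical disagreement counts between pixel-based and
# object-based RF classifications of the same validation samples.
chi2, p = mcnemar_test(n01=18, n10=42)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

A p-value below 0.05 would indicate a statistically significant accuracy difference between the two classification schemes, which the study found only for RF.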
Evaluating the spatial-temporal dynamics of ecological risk and understanding its impact on water quality in reservoirs can help optimize watershed land use and protect reservoir water quality. However, this impact remains elusive due to the lack of long-term field data, the heterogeneity of land use, and scale effects. Therefore, the Danjiangkou Reservoir area was selected as the study area; there, rapid urban expansion alongside ecological conservation and restoration measures has significantly changed the ecological environment and altered water quality. We investigated the spatial-temporal changes in land use from 1990 to 2020, evaluated how landscape ecological risk changed, and explored the impact of landscape ecological risk changes on water quality. The landscape ecological risk was calculated from landscape vulnerability and landscape disturbance (based on fragmentation, separation, and fractal dimension). The results indicated that the water body area grew rapidly (7.65 km2/a) and cropland experienced an apparent reduction (14.39%) over the past 30 years. These landscape changes decreased ecological risk, especially after the water transfer. The results also revealed that the impacts of ecological risk on water quality were better explained at the riparian scale than at the reach and catchment scales. Specifically, ecological risk was strongly related to dissolved oxygen, turbidity, Cl−, and NO3−, and moderately correlated with Ca2+ and pH, whereas it was not correlated with total nitrogen, total phosphorus, or F−, possibly due to irregular reservoir operations, occasional excessive use of fertilizers, and rock weathering. This study preliminarily discloses the impact of landscape ecological risk on water quality in a reservoir basin and provides a theoretical basis for taking measures in advance for the sustainable development of a reservoir basin.
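The risk calculation described above (vulnerability combined with a disturbance index built from fragmentation, separation, and fractal dimension) is commonly expressed in the landscape-ecology literature as an area-weighted sum over land-use types. A sketch under that assumption; the 0.5/0.3/0.2 weights and all input values are illustrative, not taken from the study:

```python
# Weights for fragmentation, separation, and fractal dimension in
# the disturbance index; 0.5/0.3/0.2 is a common choice in the
# landscape-ecology literature, assumed here rather than sourced
# from the study.
W_FRAG, W_SEP, W_FRAC = 0.5, 0.3, 0.2

def disturbance_index(frag: float, sep: float, frac: float) -> float:
    """Landscape disturbance as a weighted sum of the three metrics."""
    return W_FRAG * frag + W_SEP * sep + W_FRAC * frac

def ecological_risk_index(landuse_types: list[dict], unit_area: float) -> float:
    """ERI for one assessment unit: sum over land-use types of
    (type area / unit area) * disturbance * vulnerability."""
    eri = 0.0
    for t in landuse_types:
        e = disturbance_index(t["frag"], t["sep"], t["frac"])
        eri += (t["area"] / unit_area) * e * t["vulnerability"]
    return eri

# Hypothetical unit with two land-use types (all values illustrative).
unit = [
    {"area": 6.0, "frag": 0.40, "sep": 0.30, "frac": 0.10, "vulnerability": 0.2},
    {"area": 4.0, "frag": 0.70, "sep": 0.55, "frac": 0.20, "vulnerability": 0.6},
]
print(ecological_risk_index(unit, unit_area=10.0))
```

Computing this index per grid cell over the 1990–2020 land-use maps would yield the ecological-risk time series that the study relates to water-quality variables.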
Mapping the plucking areas of tea plantations is essential for plantation management and production estimation. However, on-ground survey methods are time-consuming and labor-intensive, and satellite-based remotely sensed data are not fine enough to map plucking areas, which are only 0.5–1.5 m wide. Unmanned aerial vehicle (UAV) remote sensing can provide an alternative. This paper explores the potential of UAV-derived remotely sensed data for identifying the plucking areas of tea plantations. In particular, four classification models were built from different UAV data sources (optical imagery, digital aerial photogrammetry, and lidar data). The results indicated that the integration of optical imagery and lidar data produced the highest overall accuracy with the random forest algorithm (94.39%), while digital aerial photogrammetry data could substitute for lidar point clouds with only a ~3% accuracy loss. The plucking area of tea plantations in the Huashan Tea Garden was accurately measured for the first time, with a total area of 6.41 ha, accounting for 57.47% of the tea garden land. The most important features for tea plantation mapping were the canopy height, the variance of heights, the blue band, and the red band. Furthermore, a cost–benefit analysis was conducted. The novelty of this study is that it is the first specific exploration of UAV remote sensing for mapping the plucking areas of tea plantations; it demonstrates the method to be accurate and cost-effective and hence represents an advance in the remote sensing of tea plantations.
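The data-fusion step behind the best-performing model can be pictured as a column-wise stacking of optical band values with lidar-derived height metrics, followed by random forest classification. A hedged sketch with synthetic data (the band names, height metrics, and all values are placeholders, not the study's dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two data sources: four optical bands
# and two lidar-derived height metrics per sample (values random,
# purely illustrative).
n_samples = 200
optical = rng.random((n_samples, 4))       # blue, green, red, NIR
lidar = rng.random((n_samples, 2))         # canopy height, height variance
labels = rng.integers(0, 2, n_samples)     # 1 = plucking area, 0 = other

# Fusion here is a simple column-wise stack of the two feature sets.
features = np.hstack([optical, lidar])

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(features, labels)

# Feature importances indicate which inputs drive the classification;
# in the study, canopy height, height variance, and the blue and red
# bands ranked highest.
names = ["blue", "green", "red", "nir", "height", "height_var"]
print(dict(zip(names, rf.feature_importances_.round(3))))
```

With real co-registered imagery and a canopy height model, the same stacking would be done per pixel or per segment before training.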
High-cost data collection and processing are challenges for light detection and ranging (LiDAR) sensors mounted on unmanned aerial vehicles (UAVs) in crop monitoring. Reducing the point density can lower data collection costs and increase efficiency but may reduce mapping accuracy, so it is necessary to determine the point cloud density that maximizes the cost–benefit of tea plucking area identification. This study evaluated the performance of LiDAR and photogrammetric point clouds of different densities for mapping the tea plucking area in the Huashan Tea Garden, Wuhan City, China. Object-based metrics derived from the UAV point clouds were used to classify tea plantations with the extreme learning machine (ELM) and random forest (RF) algorithms. The results indicated that performance varied considerably across LiDAR point densities from 0.25 pts/m2 (1%) to 25.44 pts/m2 (100%), with overall classification accuracies of 90.65–94.39% for RF and 89.78–93.44% for ELM. For photogrammetric data, point density had little effect on classification accuracy: at 10% of the initial point density (2.46 pts/m2), a similar accuracy level was obtained (a difference of approximately 1%). LiDAR point cloud density had a significant influence on digital terrain model (DTM) accuracy, with RMSEs ranging from 0.060 to 2.253 m, whereas photogrammetric point cloud density had a limited effect (RMSEs of 0.256–0.477 m) owing to the high proportion of ground points in photogrammetric point clouds. Moreover, the features important for identifying the tea plucking area were summarized for the first time using a recursive feature elimination method and a novel hierarchical clustering-correlation method. The resulting architecture diagram indicates the specific role of each feature or feature group in identifying the tea plucking area and could be used in other studies to prepare candidate features. This study demonstrates that low-density UAV point cloud data, such as the 2.55 pts/m2 (10%) used here, may be sufficient for fine-scale tea plucking area mapping without compromising accuracy.
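One simple way to emulate lower acquisition densities, as in the density experiments above, is random decimation of the point cloud to a target density. The abstract does not state the exact thinning procedure the authors used, so this sketch assumes uniform random subsampling:

```python
import numpy as np

def thin_point_cloud(points: np.ndarray, area_m2: float,
                     target_density: float, seed: int = 0) -> np.ndarray:
    """Randomly subsample a point cloud to a target density (pts/m2).

    points: (N, 3) array of x, y, z coordinates
    area_m2: footprint area covered by the cloud
    """
    rng = np.random.default_rng(seed)
    n_keep = min(len(points), int(round(target_density * area_m2)))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

# Hypothetical cloud at ~25.44 pts/m2 over a 1000 m2 plot, thinned
# to the 10% level (~2.55 pts/m2) discussed above.
cloud = np.random.default_rng(1).random((25440, 3))
thinned = thin_point_cloud(cloud, area_m2=1000.0, target_density=2.55)
print(len(thinned) / 1000.0)  # resulting density in pts/m2
```

Running the classification pipeline on clouds thinned to a ladder of densities (100%, 10%, 1%, and so on) is how the accuracy-versus-density trade-off reported above would be measured.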