The leaf is the organ crucial for photosynthesis and nutrient production in plants; as such, the number of leaves is one of the key indicators used to describe canopy development and growth. The irregular shape and distribution of leaf blades, together with the effects of natural light, make blade segmentation and detection difficult, and inaccurate acquisition of plant phenotypic parameters may compromise subsequent judgments of crop growth status and yield. To address the challenge of counting dense, overlapping plant leaves in natural environments, we propose an improved deep-learning-based object detection algorithm that merges a space-to-depth module, a Convolutional Block Attention Module (CBAM), and Atrous Spatial Pyramid Pooling (ASPP) into the network, and applies the smooth L1 function to improve the object prediction loss. We evaluated our method on images of five plant species collected in indoor and outdoor environments. The experimental results demonstrate that our dense-leaf counting algorithm improved the average detection accuracy from 85% to 96%. Our algorithm also outperformed other state-of-the-art object detection algorithms in both detection accuracy and time consumption.
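The smooth L1 function mentioned above is a standard box-regression loss: quadratic for small residuals and linear for large ones, so outlier boxes do not dominate the gradient. A minimal sketch (the `beta` threshold and function name are illustrative, not from the paper):

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    # Quadratic below |diff| < beta, linear above it; the two pieces
    # meet with matching value and slope at |diff| == beta.
    diff = np.abs(pred - target)
    return np.where(diff < beta,
                    0.5 * diff ** 2 / beta,
                    diff - 0.5 * beta)
```

For a residual of 0.5 the loss is 0.5 * 0.25 = 0.125 (quadratic branch); for a residual of 2.0 it is 2.0 - 0.5 = 1.5 (linear branch).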
The identification of cucumber fruit is an essential procedure in automated harvesting in greenhouses. To enhance the identification ability of object detection models for cucumber fruit harvesting, an extended RGB image dataset (n = 801) with 3943 positive and negative labels was constructed. First, twelve channels across four color spaces (RGB, YCbCr, HSI, L*a*b*) were compared using the ReliefF method to select the channel with the highest weight. Second, the RGB image dataset was converted to a pseudo-color dataset of the chosen channel (the Cr channel) to pre-train the YOLOv5s model before formal training on the RGB image dataset. With this method, the YOLOv5s model was enhanced by the Cr channel. The experimental results show that the cucumber fruit recognition precision of the enhanced YOLOv5s model increased from 83.7% to 85.19%. Compared with the original YOLOv5s model, the average values of AP, F1, recall rate, and mAP increased by 8.03%, 7%, 8.7%, and 8%, respectively. To verify the applicability of the pre-training method, ablation experiments were conducted on SSD, Faster R-CNN, and four YOLOv5 versions (s, l, m, x), with accuracy increasing by 1.51%, 3.09%, 1.49%, 0.63%, 3.15%, and 2.43%, respectively. These results indicate that the Cr channel pre-training method is promising for enhancing cucumber fruit detection against a near-color background.
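The Cr channel selected above is the chroma-red component of YCbCr, which separates red–green contrast from luminance and so can help distinguish green fruit from a near-color background. A minimal sketch of extracting it from an 8-bit RGB image, assuming the common BT.601 (JPEG-style) conversion coefficients (the paper does not state which variant it used):

```python
import numpy as np

def cr_channel(rgb):
    # BT.601 full-range chroma-red for an 8-bit RGB array of shape
    # (..., 3); a neutral gray pixel maps to the midpoint 128.
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.clip(cr, 0.0, 255.0)
```

The resulting single-channel image can then be replicated or color-mapped into a pseudo-color dataset for pre-training, as described in the abstract.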
Plant photosynthesis and biomass production are associated with the amount of intercepted light, especially the light distribution inside the canopy. Three virtual canopies (n = 80, 3.25 plants/m²) were constructed based on the average leaf size of digitized plant structures: 'small leaf' (98.1 cm²), 'medium leaf' (163.0 cm²), and 'big leaf' (241.6 cm²). The ratios of diffuse light were set at three levels (27.8%, 48.7%, 89.6%). Simulations of light interception were conducted under the different diffuse-light ratios, both before and after normalization of incident radiation. With 226.1% more diffuse light, light interception increased by 34.4%. However, the 56.8% reduction in radiation accompanying the increased proportion of diffuse light negated this advantage, producing a 26.8% reduction in light interception. The big-leaf canopy had more mutual shading, but its larger leaf area intercepted 56.2% more light than the small-leaf canopy under the same light conditions. The small-leaf canopy showed higher efficiency in light penetration and higher light interception per unit of leaf area. The study showed that the 3D structural model is an effective tool for quantitative analysis of the interaction between light and plant canopy structure.
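The normalization step above makes scenarios with different diffuse-light ratios comparable by rescaling each canopy's interception to a common incident radiation. A minimal sketch of that idea, together with the per-unit-leaf-area efficiency metric; the function names and argument units are illustrative assumptions, not from the paper:

```python
def normalized_interception(intercepted, incident, reference_incident):
    # Rescale intercepted light as if the scenario had received the
    # reference incident radiation, so diffuse-ratio treatments with
    # different total radiation can be compared directly.
    return intercepted * (reference_incident / incident)

def interception_per_leaf_area(intercepted, leaf_area):
    # Efficiency metric: light intercepted per unit of leaf area,
    # the quantity on which the small-leaf canopy scored highest.
    return intercepted / leaf_area
```

For example, a canopy intercepting 50 units under 100 units of incident light normalizes to 100 units of interception at a reference of 200, allowing a like-for-like comparison with a canopy simulated under stronger radiation.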