Abstract: Pod phenotypic traits are closely related to grain yield and quality. Pod phenotype detection in soybean populations in natural environments is important to soybean breeding, cultivation, and field management. For an accurate pod phenotype description, a dynamic detection method is proposed based on an improved YOLO-v5 network. First, two varieties were taken as research objects. A self-developed field soybean three-dimensional color image acquisition vehicle was used to obtain RGB and depth images of soybean …
“…mAP is used to evaluate the performance of object detection models, particularly in multi-class object detection, measuring the model's accuracy across different categories while considering class imbalances. The formulas defining these metrics are presented in equations (9)–(11), where P denotes precision…”
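The quoted metrics can be illustrated with a minimal sketch: precision is P = TP/(TP+FP), recall is R = TP/(TP+FN), AP is the area under the precision–recall curve, and mAP averages AP over classes. The toy detection list and ground-truth count below are invented for the example and are not taken from the cited paper:

```python
"""Minimal sketch of precision, recall, and AP for object detection.
AP is the area under the precision-recall curve; mAP averages AP over
classes.  The scored detections below are a toy example."""

def average_precision(scored_hits, num_gt):
    """scored_hits: list of (confidence, is_true_positive) pairs.
    num_gt: number of ground-truth boxes for this class."""
    scored_hits.sort(key=lambda x: -x[0])            # rank by confidence
    tp = fp = 0
    points = []                                      # (recall, precision)
    for _, is_tp in scored_hits:
        tp += is_tp
        fp += not is_tp
        points.append((tp / num_gt, tp / (tp + fp)))
    # make the precision envelope monotonically decreasing from the right
    max_p = 0.0
    for i in range(len(points) - 1, -1, -1):
        r, p = points[i]
        max_p = max(max_p, p)
        points[i] = (r, max_p)
    # all-point interpolation: sum precision * recall increment
    ap, prev_r = 0.0, 0.0
    for r, p in points:
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# toy class: 3 of 4 ground-truth pods found, with one false positive
dets = [(0.9, True), (0.8, True), (0.7, False), (0.6, True)]
ap = average_precision(dets, num_gt=4)
```

Averaging `average_precision` over every class in the dataset gives the mAP value reported by the quoted evaluation.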
The rapid and accurate identification of sugarcane internodes is of great significance for field operations and precision management in the sugarcane industry, and it is also a fundamental task for the industry's intelligent transformation. However, in complex field environments, traditional image processing techniques suffer from low accuracy and efficiency and are mainly limited to server-side processing. Meanwhile, the sugarcane industry requires a large amount of manual involvement, leading to high labor costs. In response to these issues, this paper took YOLOv5s as the base model, incorporated the K-means clustering algorithm, and added the CBAM attention module and the VarifocalNet mechanism to the algorithm. The improved model is referred to as YOLOv5s-KCV. We implemented the YOLOv5s-KCV algorithm on a Jetson TX2 edge computing device with a well-configured runtime environment, completing the design and development of a real-time sugarcane internode identification system. Ablation experiments, comparative experiments against mainstream visual recognition network models, and field performance experiments verified the effectiveness of both the proposed improvements and the developed system. The experimental results demonstrate that the YOLOv5s-KCV improvements are effective, with a recognition accuracy of 89.89%, a recall rate of 89.95%, and an mAP of 92.16%, increases of 6.66%, 5.92%, and 7.44% over YOLOv5s, respectively. The system underwent performance testing under various weather conditions and at different times in the field, achieving a minimum sugarcane internode recognition accuracy of 93.5%. Therefore, the developed system can identify sugarcane internodes accurately and in real time in field environments, providing new insights for related work in the sugarcane field industry.
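A common way the K-means step mentioned in this abstract is applied in YOLO pipelines is to re-estimate anchor boxes by clustering the labeled (width, height) pairs with a 1 − IoU distance. The following is a minimal sketch under that assumption; the box list and the deterministic initialization are invented for illustration and are not the paper's actual procedure:

```python
"""Sketch of K-means anchor clustering on (width, height) box pairs,
using the 1 - IoU distance common in YOLO anchor estimation.
The box list and initialization below are illustrative only."""

def iou_wh(box, anchor):
    """IoU of two boxes aligned at the origin, given as (w, h)."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100):
    # deterministic init: evenly spaced boxes after sorting by area
    srt = sorted(boxes, key=lambda b: b[0] * b[1])
    anchors = [srt[i * len(srt) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            # assign to the anchor with the smallest 1 - IoU distance,
            # i.e. the largest IoU
            j = max(range(k), key=lambda i: iou_wh(b, anchors[i]))
            clusters[j].append(b)
        # recompute each anchor as the mean (w, h) of its cluster
        for j, c in enumerate(clusters):
            if c:
                anchors[j] = (sum(b[0] for b in c) / len(c),
                              sum(b[1] for b in c) / len(c))
    return sorted(anchors, key=lambda a: a[0] * a[1])

boxes = [(10, 14), (12, 16), (33, 30), (36, 28), (60, 80), (64, 90)]
anchors = kmeans_anchors(boxes, k=3)
```

The resulting anchors replace the defaults in the model configuration so that the priors match the size distribution of the target dataset.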
Section: Data Acquisition
“…It was found that the F1 score of the improved YOLOv4-Tiny-tea model was 12.11, 11.66, and 6.76 percentage points higher than that of the YOLOv3, YOLOv4, and YOLOv5l network models, respectively [22]. Fu et al. introduced the channel attention–asymmetric spatial pyramid pooling (CA-ASPP) module to improve the detection of small and weak pod targets [23]. The precision of the improved YOLOv5 model increased by about 6%, and the precision of pod number detection in the population of 200 soybeans reached 88.14%.…”
Aiming at the problems of dense distribution, similar color, and easy occlusion of the tender leaves of famous and excellent tea, an improved YOLOv7 (You Only Look Once v7) model based on an attention mechanism was proposed in this paper. Attention modules were added before and after the enhanced feature extraction network (FPN), and the detection performance of the YOLOv7+SE, YOLOv7+ECA, YOLOv7+CBAM, and YOLOv7+CA networks was compared. The YOLOv7+CBAM model achieved the best recognition results, with an accuracy of 93.71% and a recall rate of 89.23%. The model showed high accuracy and a low miss rate in small-target, multi-target, occluded-target, and densely distributed target detection. Moreover, the model had good real-time performance and good application prospects in the intelligent management and automatic harvesting of famous and excellent tea.
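The SE, ECA, CBAM, and CA modules compared above share a common pattern: pool the feature map into a compact descriptor, derive attention weights from it, and rescale the features. A minimal NumPy sketch of the SE-style channel branch follows; the feature map and the two weight matrices are random placeholders here, whereas in a real network they are learned during training:

```python
"""Squeeze-and-Excitation (SE) channel attention, sketched in NumPy.
The feature map and fully connected weights are random placeholders;
in a trained network they are learned parameters."""
import numpy as np

def se_channel_attention(x, w1, w2):
    """x: feature map (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeeze = x.mean(axis=(1, 2))                    # global avg pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)           # FC + ReLU -> (C//r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # FC + sigmoid -> (C,)
    return x * weights[:, None, None]                # rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))                   # toy feature map
w1 = rng.standard_normal((2, 8)) * 0.1               # reduction ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
y = se_channel_attention(x, w1, w2)
# y keeps the input shape; each channel is scaled by a weight in (0, 1)
```

CBAM extends this channel branch with a spatial branch that weights each position, and CA factorizes the pooling along the height and width axes, but the rescaling principle is the same.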
“…If automatic picking of waxberry is to be realised, the critical tasks are waxberry target detection and spatial 3D localisation. In other words, a good detection algorithm equipped with an RGB-D or depth camera can complete the spatial localisation of waxberry fruits [4,5]. Therefore, this paper focuses on the target detection algorithm and proposes relevant improvements to lay the foundation for subsequent waxberry localisation.…”
To address the safety and efficiency problems in the Waxberry picking process, the slow speed and low precision of high-density Waxberry target detection against complex backgrounds were studied, and a lightweight Waxberry target detection algorithm based on YOLOv5 is proposed. In this study, the C3-Faster1 and C3-Faster2 modules are introduced in the backbone and neck of the network: C3-Faster1 improves model speed with a simple structure; C3-Faster2 integrates a context attention mechanism and a Transformer module on top of C3-Faster1, so that the network attends to contextual information in Waxberry images and expands the channel receptive field. A new pyramid module, SPPFCSPC, is proposed to expand the receptive field and improve the accuracy of boundary detection. The model also combines Coordinate Attention (CA) and the DyHead dynamic detection head to suppress useless information and enhance the detection of small targets. Compared to YOLOv4, YOLOv7, and YOLOv8, mean average precision (mAP) improved by 5.7%, 9.4%, and 8.3%, respectively. Compared to the base YOLOv5 model, mAP improved from 86.5% to 91.9% running on a 2 GB Jetson Nano, and the improved model is 5.03 frames per second (FPS) faster than YOLOv5. Experiments show that the designed modules are effective in Waxberry detection tasks.
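FPS figures like those quoted above are typically measured by averaging wall-clock inference time over many frames after a warm-up phase that excludes one-time setup costs. A minimal timing sketch, in which the `infer` callable is a stand-in for the actual model's forward pass (an assumption, not the paper's benchmark code):

```python
"""Sketch of how per-model FPS on an edge device is usually measured:
warm up, then average wall-clock time over N frames.  The `infer`
callable below is a stand-in for a real model's forward pass."""
import time

def measure_fps(infer, frames, warmup=5):
    for f in frames[:warmup]:           # warm-up: JIT, cache, allocator
        infer(f)
    start = time.perf_counter()
    for f in frames:
        infer(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# toy stand-in: "inference" that just sleeps ~2 ms per frame
fps = measure_fps(lambda f: time.sleep(0.002), frames=list(range(50)))
```

On an actual device the frame list would be preloaded images and `infer` the model call, so the measurement reflects inference alone rather than disk I/O.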