Leafy vegetables are an essential source of the nutrients that people need in their daily lives. Quantifying vegetable phenotypes and estimating yield are prerequisites for selecting genetic varieties and improving planting methods. The traditional approach is manual measurement, which is time-consuming and cumbersome. Efficient and convenient in situ phenotype identification methods are therefore needed to provide data support for breeding research and crop yield monitoring, thereby increasing vegetable yield. In this paper, a novel approach was developed for the in situ determination of the three-dimensional (3D) phenotype of vegetables from video clips recorded with a smartphone. First, a smartphone was used to record the vegetable from different angles; the key frames containing the crop area were then extracted from the video using an algorithm based on a vegetation index and scale-invariant feature transform (SIFT) matching. From the key frames, a dense point cloud of the vegetables was reconstructed using the Structure from Motion (SfM) method, and a segmented point cloud and a point-cloud skeleton were obtained using a clustering algorithm. Finally, the plant height, leaf number, leaf length, leaf angle, and other phenotypic parameters were derived from the point cloud and its skeleton. Compared with manual measurements, the root-mean-square errors (RMSEs) of plant height, leaf number, leaf length, and leaf angle were 1.82, 1.57, 2.43, and 4.7, respectively. The measurement accuracy of each indicator was greater than 80%. The results show that the proposed method provides a convenient, fast, and low-cost 3D phenotype measurement pipeline.
Compared with other photogrammetry-based methods, this method does not require a labor-intensive image-capturing process and can reconstruct a high-quality point-cloud model directly from videos of crops.
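The key-frame screening step above filters video frames by how much crop area they contain. A minimal sketch of that idea, assuming the Excess Green (ExG) index is used (the abstract does not name which vegetation index, and the 0.1 threshold is purely illustrative):

```python
# Hedged sketch: screen frames for crop content with the Excess Green (ExG)
# vegetation index. The choice of index and the threshold are assumptions,
# not taken from the paper.
def excess_green(pixel):
    """ExG = 2g - r - b on chromatic (normalized) RGB coordinates."""
    r, g, b = pixel
    total = (r + g + b) or 1  # avoid division by zero on black pixels
    r, g, b = r / total, g / total, b / total
    return 2 * g - r - b

def crop_fraction(image, threshold=0.1):
    """Fraction of pixels classified as vegetation (illustrative threshold)."""
    veg = sum(1 for row in image for px in row if excess_green(px) > threshold)
    n = sum(len(row) for row in image)
    return veg / n

# A frame is kept as a key-frame candidate when enough of it is vegetation:
frame = [[(30, 120, 40), (200, 200, 200)],
         [(25, 110, 35), (20, 100, 30)]]
print(crop_fraction(frame))  # 0.75
```

In a full pipeline, frames passing this test would additionally be matched against neighboring frames with SIFT features to drop near-duplicates before SfM reconstruction.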
The detection of architecture is one of the essential questions in plant root phenotyping research. The classical approach to root architecture detection is manual measurement, which is not only tedious but also unreliable, and the roots are easily damaged. This paper describes a three-dimensional architecture measurement method based on X-ray computed tomography (XCT) and centerline extraction. The method comprises the following steps: (1) obtaining root CT images with an XCT system; (2) obtaining a three-dimensional root model after image segmentation and reconstruction, where a series of pre-processing steps improves the quality of the reconstructed model and resolves model fractures; (3) extracting the root centerline based on mesh contraction, with post-processing yielding a high-quality centerline; and (4) calculating the architecture parameters. Different root samples were tested to validate the centerline extraction method, and the root architecture was calculated from the centerline. Compared with manual measurements, the mean absolute percentage errors of root length and root angle were 1.74% and 4.51%, respectively. The entire algorithm runs in less than 30 seconds. This study may provide an effective method for root architecture detection.
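Once a centerline has been extracted, the architecture parameters in step (4) reduce to simple geometry on an ordered polyline of 3D points. A minimal sketch, assuming root length is the summed segment length and root angle is measured between the first centerline segment and the downward vertical (one common convention; the paper's exact definition may differ):

```python
import math

# Hedged sketch: compute root length and root angle from a centerline given
# as an ordered list of 3D points. The angle convention is an assumption.
def polyline_length(points):
    """Total length of the centerline: sum of consecutive segment lengths."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def angle_from_vertical(points):
    """Angle (degrees) between the first centerline segment and the
    downward z-axis, i.e. the direction (0, 0, -1)."""
    (x0, y0, z0), (x1, y1, z1) = points[0], points[1]
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    seg = math.sqrt(dx * dx + dy * dy + dz * dz)
    return math.degrees(math.acos(-dz / seg))

# A toy centerline descending at 45 degrees:
centerline = [(0, 0, 0), (1, 0, -1), (2, 0, -2)]
print(polyline_length(centerline))      # 2.828... (= 2 * sqrt(2))
print(angle_from_vertical(centerline))  # 45.0
```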
To obtain higher economic benefits, large eel-breeding companies classify live eels by weight. Because of their strong mobility and smooth body surfaces, live eels are not suited to traditional mechanical weighing. In this study, a live-eel sorting machine based on machine vision was developed, along with a novel method for measuring the weight of live eels from images. First, a backlit workbench was designed to capture static images of eels, and the projected area and skeleton length were then obtained through image preprocessing. Because an eel's body is generally cylindrical and gradually transitions to a flat tail, changes in tail posture affect the shape in the images; a weight measurement model combining the projected area and the skeleton length was therefore proposed. The optimal scale division coefficient of the weight model was found experimentally to be 0.745. Eels in different weight ranges were then selected to verify the model error and to obtain an error-correction function, and the weight gradient was used to confirm the corrected eel weight model. Finally, the system's calculated results were compared with the actual measurements: the root-mean-square error (RMSE) was 12.94 g, and the mean absolute percentage error (MAPE) was 2.12%. The results show that the proposed method provides a convenient, fast, and low-cost non-contact weight measurement for live eels, reduces their damage rate, and can meet the technical requirements of actual production.
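The abstract does not give the functional form of the combined weight model, only that it blends projected area and skeleton length with a scale division coefficient of 0.745. One plausible reading, sketched below, is a weighted blend of an area-based estimate and a length-based (allometric) estimate; the calibration constants `K_AREA` and `K_LEN` are hypothetical placeholders, not values from the paper:

```python
# Hedged sketch of a weight model combining projected area A (cm^2) and
# skeleton length L (cm). Only the coefficient s = 0.745 comes from the
# abstract; the blend form and the calibration constants are assumptions.
S = 0.745      # scale division coefficient (from the abstract)
K_AREA = 2.0   # hypothetical g per cm^2 calibration constant
K_LEN = 0.005  # hypothetical g per cm^3 constant (cubic-length law)

def eel_weight(area_cm2, length_cm, s=S):
    w_area = K_AREA * area_cm2       # area-based estimate
    w_len = K_LEN * length_cm ** 3   # length-based (allometric) estimate
    return s * w_area + (1 - s) * w_len

# Example with a hypothetical eel: 250 cm^2 projected area, 45 cm skeleton.
print(round(eel_weight(250.0, 45.0), 1))  # about 488.7 g (hypothetical constants)
```

In the paper, this raw estimate would then be passed through the experimentally fitted error-correction function before sorting.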
Deep learning techniques have made great progress in target detection in recent years, making it possible to accurately identify plants in complex agricultural field environments. This project combines deep learning algorithms with spraying technology to design a machine-vision precision real-time targeted spraying system for field scenarios. First, the overall structure of the system was proposed, consisting of an image acquisition and recognition module, an electronically controlled spray module, and a pressure-stabilized pesticide supply module. Then, based on the YOLOv5s target detection model, the model was made lighter and improved by replacing the backbone network and adding an attention mechanism. On this basis, a grid decision control algorithm for switching the solenoid valve group on and off was designed, and common noxious weeds were selected as targets to produce datasets and complete model training. Finally, the hardware system and detection model were deployed on an electric spray-bar sprayer, and field trials were conducted at different speeds. The experimental results show that the improved algorithm reduces the model size to 53.57% of the original with little impact on mAP accuracy and improves FPS by 18.16%. The on-target spraying accuracies at speeds of 2 km/h, 3 km/h, and 4 km/h were 90.80%, 86.20%, and 79.61%, respectively, and the spraying hit rate decreased as the operating speed increased. Among the hit-rate components, the effective recognition rate was significantly affected by speed, while the relative recognition hit rate was less affected.
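The grid decision step maps detections to valve commands. A minimal sketch of one way this can work, assuming the camera frame is divided into one vertical strip per nozzle and a valve opens whenever a detected weed bounding box overlaps its strip (the box format and strip layout are assumptions, not the paper's exact algorithm):

```python
# Hedged sketch of a grid decision for solenoid-valve on/off control.
# Bounding boxes are assumed to be (x1, y1, x2, y2) in pixel coordinates.
def valve_states(boxes, frame_width, n_valves):
    """Return one on/off flag per nozzle; a valve opens when any weed box
    overlaps the vertical strip of the frame assigned to that nozzle."""
    strip = frame_width / n_valves
    states = [False] * n_valves
    for x1, _, x2, _ in boxes:
        first = max(0, int(x1 // strip))
        last = min(n_valves - 1, int(x2 // strip))
        for i in range(first, last + 1):
            states[i] = True
    return states

# Two weeds detected in a 640 px wide frame controlled by 8 nozzles:
print(valve_states([(50, 10, 150, 60), (600, 20, 630, 70)], 640, 8))
# [True, True, False, False, False, False, False, True]
```

A real controller would also delay each command by the travel time between the camera's field of view and the nozzle, which is why hit rate degrades at higher speeds.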
As an effective heuristic method, three-way decision theory gives a new semantic interpretation to the three regions of rough sets and has broad application potential. To classify agricultural product information more accurately under given thresholds, this paper first makes a comprehensive evaluation of the decision, particularly the influence of the attributes of the event itself on the results and their interactions. Using fuzzy sets with membership and non-membership degrees, the paper analyzes and proposes two cases of proportional correlation coefficients in the transformation of the deferred-decision domain, and selects the corresponding coefficients to compare the results directly, so that consumers can conveniently grasp product attribute information when making decisions. On this basis, the paper analyzes standard data to verify the accuracy of the model. The proposed algorithm, which classifies agricultural product information based on three-way decisions, is then applied to relevant agricultural product data. The experimental results show that the algorithm obtains more accurate results through a more straightforward calculation process. It can be concluded that the proposed algorithm enables people to make more convenient and accurate decisions based on product attribute information.
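The underlying three-way rule partitions items into acceptance, rejection, and deferred (boundary) regions by comparing a membership degree against two thresholds. A minimal sketch of the standard rule, with illustrative threshold values not taken from the paper:

```python
# Hedged sketch of the classical three-way decision rule: with membership
# degree mu and thresholds alpha > beta, an item is accepted, rejected, or
# deferred. The threshold values below are illustrative only.
def three_way(mu, alpha=0.7, beta=0.3):
    if mu >= alpha:
        return "accept"       # positive region
    if mu <= beta:
        return "reject"       # negative region
    return "defer"            # boundary region: decision is delayed

print([three_way(m) for m in (0.9, 0.5, 0.1)])
# ['accept', 'defer', 'reject']
```

The paper's contribution concerns how items in the deferred region are resolved, using proportional correlation coefficients derived from membership and non-membership degrees; the rule above is only the shared starting point.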