Over the last few years, 3D imaging of plant geometry has become of significant importance for phenotyping and plant breeding. Several sensing techniques, such as 3D reconstruction from multiple images and laser scanning, are the methods of choice in different research projects. The use of RGB cameras for 3D reconstruction requires a significant amount of post-processing, whereas laser scanning in this context entails high investment costs. The aim of the present study is a comparison between two current low-cost 3D imaging systems and a high-precision close-up laser scanner as a reference method. The David laser scanning system and the Microsoft Kinect device were used as low-cost systems. The 3D measuring accuracy of both low-cost sensors was estimated from deviations measured on test specimens. Parameters extracted from the volumetric shape of sugar beet taproots, the leaves of sugar beets and the shape of wheat ears were evaluated and compared to reference measurements regarding accuracy and correlation. The evaluation scenarios were chosen with respect to the plant parameters recorded in current phenotyping projects. In the present study, low-cost 3D imaging devices proved highly reliable for the demands of plant phenotyping, with the potential to be implemented in automated application procedures while saving acquisition costs. Our study confirms that a carefully selected low-cost sensor can replace an expensive laser scanner in many plant phenotyping scenarios.
Background: Laser scanning has recently become a powerful and common method for plant parameterization and plant growth observation on nearly every scale. However, 3D measurements with high accuracy, spatial resolution and speed yield a multitude of points that require processing and analysis. The primary objective of this research was to establish a reliable and fast technique for high-throughput phenotyping using differentiation, segmentation and classification of single plants by a fully automated system. In this report, we introduce a technique for the automated classification of plant point clouds and demonstrate its applicability for plant parameterization.
Results: A surface feature histogram based approach from the field of robotics was adapted to close-up laser scans of plants. Local geometric point features describe class characteristics, which were used to distinguish among different plant organs. This approach was tested on several plant species. Grapevine stems and leaves were classified with an accuracy of up to 98%. The proposed method was successfully transferred to 3D laser scans of wheat plants for yield estimation, where wheat ears were separated from other plant organs with an accuracy of 96%. Subsequently, the ear volume was calculated and correlated to the ear weight, the kernel weights and the number of kernels. Furthermore, the impact of data resolution on classification accuracy was evaluated for point-to-point distances between 0.3 and 4.0 mm.
Conclusion: We introduced an approach using surface feature histograms for automated plant organ parameterization. Highly reliable classification results of about 96% were obtained for the separation of grapevine and wheat organs. The approach was found to be independent of the point-to-point distance and applicable to multiple plant species. Its reliability, flexibility and high degree of automation make this method well suited for the demands of high-throughput phenotyping.
Highlights:
• Automatic classification of plant organs using geometric surface information
• Transfer of analysis methods for low-resolution point clouds to close-up laser measurements of plants
• Analysis of 3D data requirements for automated plant organ classification
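The core idea of the surface-feature-histogram approach can be illustrated in a minimal sketch: local eigenvalue-based shape descriptors (linearity for stem-like, planarity for leaf-like surfaces) are histogrammed per patch and compared against reference histograms. All function names, the choice of descriptors and the nearest-histogram classifier here are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch of surface-feature-histogram classification of plant organs.
import numpy as np

def local_shape_features(points, k=10):
    """Eigenvalue-based shape descriptors (linearity, planarity, sphericity)
    from a PCA over each point's k nearest neighbours."""
    feats = []
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        ev = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))[::-1]  # l1 >= l2 >= l3
        l1, l2, l3 = ev / ev.sum()
        feats.append([(l1 - l2) / l1,   # linearity: high on stem-like shapes
                      (l2 - l3) / l1,   # planarity: high on leaf-like shapes
                      l3 / l1])         # sphericity
    return np.array(feats)

def feature_histogram(feats, bins=8):
    """Normalised histogram per feature channel, concatenated into one vector."""
    hs = [np.histogram(feats[:, i], bins=bins, range=(0, 1))[0]
          for i in range(feats.shape[1])]
    h = np.concatenate(hs).astype(float)
    return h / h.sum()

def classify(patch, refs):
    """Assign the patch to the nearest reference histogram (chi-squared-like)."""
    h = feature_histogram(local_shape_features(patch))
    dists = {name: np.sum((h - r) ** 2 / (h + r + 1e-12))
             for name, r in refs.items()}
    return min(dists, key=dists.get)

# Toy data: a thin line-like "stem" and a flat plane-like "leaf" patch.
rng = np.random.default_rng(0)
stem = np.c_[rng.normal(0, 0.01, (200, 2)), rng.uniform(0, 5, 200)]
leaf = np.c_[rng.uniform(0, 3, (200, 2)), rng.normal(0, 0.01, 200)]
refs = {"stem": feature_histogram(local_shape_features(stem)),
        "leaf": feature_histogram(local_shape_features(leaf))}

probe = np.c_[rng.normal(0, 0.01, (150, 2)), rng.uniform(0, 5, 150)]
print(classify(probe, refs))  # expected: stem (the probe patch is line-like)
```

In practice the reference histograms would be learned from labelled training scans, but the geometric intuition (stems are locally linear, leaves locally planar) is the same.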
Accessing a plant's 3D geometry has become of significant importance for phenotyping during the last few years. Close-up laser scanning is an established method to acquire 3D plant shapes in real time with high detail, but it is stationary and has high investment costs. 3D reconstruction from images using structure from motion (SfM) and multi-view stereo (MVS) is a flexible, cost-effective method, but it requires post-processing procedures. The aim of this study is to evaluate the potential measuring accuracy of an SfM- and MVS-based photogrammetric method for the task of organ-level plant phenotyping. For this, reference data are provided by a high-accuracy close-up laser scanner. Using both methods, point clouds of several tomato plants were reconstructed on six consecutive days. The parameters leaf area, main stem height and convex hull of the complete plant were extracted from the 3D point clouds and compared to the reference data regarding accuracy and correlation. These parameters were chosen regarding the demands of current phenotyping scenarios. The study shows that the photogrammetric approach is highly suitable for the presented monitoring scenario, yielding high correlations to the reference measurements. This cost-effective 3D reconstruction method represents an alternative to an expensive laser scanner in the studied scenarios, with potential for automated procedures.
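Of the parameters listed above, the convex hull is the most direct to compute from a point cloud. A minimal sketch, assuming `scipy.spatial.ConvexHull` as the tooling (the abstract does not specify the authors' implementation):

```python
# Hedged sketch: convex-hull volume of a plant point cloud as a phenotyping parameter.
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume(points: np.ndarray) -> float:
    """Volume of the convex hull enclosing an (N, 3) point cloud."""
    return ConvexHull(points).volume

# Sanity check: the hull of a unit cube's eight corners has volume 1.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
print(hull_volume(cube))  # 1.0
```

On real scans the same call is applied to the full reconstructed plant cloud, so hull volume can be tracked across the six acquisition days without any segmentation step.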
Due to its perennial nature and size, the acquisition of phenotypic data in grapevine research is almost exclusively restricted to the field and done by visual estimation. This kind of evaluation procedure is limited by time, cost and the subjectivity of records. As a consequence, objectivity, automation and greater precision of phenotypic data evaluation are needed to increase the number of samples, manage grapevine repositories, enable genetic research of new phenotypic traits and, therefore, increase the efficiency of plant research. In the present study, an automated field phenotyping pipeline was set up and applied in a plot of genetic resources. The application of the PHENObot allows image acquisition from at least 250 individual grapevines per hour directly in the field without user interaction. Data management is handled by a database (IMAGEdata). The automatic image analysis tool BIVcolor (Berries in Vineyards-color) permitted the collection of precise phenotypic data for two important fruit traits, berry size and color, within a large set of plants. The application of the PHENObot represents an automated tool for high-throughput sampling of image data in the field. The automated analysis of these images facilitates the generation of objective and precise phenotypic data on a larger scale.
Vegetation is an important factor influencing solifluction processes, while at the same time, solifluction processes and landforms influence species composition, fine‐scale distribution and corresponding ecosystem functioning. However, how feedbacks between plants and solifluction processes influence the development of turf‐banked solifluction lobes (TBLs) and their geomorphic and vegetation patterns is still poorly understood. We addressed this knowledge gap in a detailed biogeomorphic investigation in the Turtmann glacier foreland (Switzerland). Methods employed include geomorphic and vegetation mapping, terrain assessment with an unmanned aerial vehicle (UAV) and temperature logging. Results were subsequently integrated with knowledge from previous geomorphic and ecological studies into a conceptual model. Our results show that geomorphic and vegetation patterns at TBLs are closely linked through the lobe elements tread, riser and ridge. A conceptual four‐stage biogeomorphic model of TBL development, with ecosystem engineering by the dwarf shrub Dryas octopetala as the dominant process, can explain these interlinked patterns. Based on this model, we demonstrate that TBLs are biogeomorphic structures and follow a cyclic development, during which the role of their components for engineer and non‐engineer species changes. Our study presents the first biogeomorphic model of TBL development and highlights the applicability and necessity of biogeomorphic approaches and research in periglacial environments. Copyright © 2016 John Wiley & Sons, Ltd.
Due to the rise of laser scanning, the 3D geometry of plant architecture has become easy to acquire. Nevertheless, an automated interpretation and, finally, the segmentation into functional groups are still difficult to achieve. Two barley plants were scanned over a time course, and the organs were separated by applying a histogram-based classification algorithm. The leaf organs were represented by meshing algorithms, while the stem organs were parameterized by a least-squares cylinder approximation. We introduced surface feature histograms with an accuracy of 96% for the separation of the barley organs, leaf and stem. This enables growth monitoring of barley plants over time. Its reliability was demonstrated by a comparison with manually fitted parameters, with correlations of R² = 0.99 for the leaf area and R² = 0.98 for the cumulated stem height. A proof of concept was given for its applicability to the detection of water stress in barley, where the extension growth of an irrigated and a non-irrigated plant was monitored.
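The stem parameterization mentioned above can be sketched in simplified form: the cylinder axis is approximated by the dominant principal direction of the stem point cloud, and the radius by the mean radial distance to that axis. This is a stand-in under stated assumptions, not the authors' exact least-squares formulation.

```python
# Hedged sketch of a cylinder approximation for stem point clouds.
import numpy as np

def fit_cylinder(points):
    """Return (centroid, axis unit vector, radius) for an (N, 3) stem cloud."""
    c = points.mean(axis=0)
    # Dominant right-singular vector = direction of largest variance = axis.
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    axis = vt[0]
    # Radial component of each point relative to the axis line through c.
    rel = points - c
    radial = rel - np.outer(rel @ axis, axis)
    radius = np.linalg.norm(radial, axis=1).mean()
    return c, axis, radius

# Toy stem: points on a radius-0.5 cylinder along z (20 turns, height 10).
t = np.linspace(0, 40 * np.pi, 2000)
z = np.linspace(0, 10, 2000)
pts = np.c_[0.5 * np.cos(t), 0.5 * np.sin(t), z]

c, axis, r = fit_cylinder(pts)
print(round(r, 2), round(abs(axis[2]), 2))  # 0.5 1.0
```

Cumulated stem height then follows from the extent of the points projected onto the recovered axis; a full least-squares fit would additionally refine axis and radius jointly.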
Hyperspectral imaging sensors have been introduced for measuring the health status of plants. Recently, they have also been used for close-range sensing of plant canopies with a highly complex architecture. However, the complex geometry of plants and their interaction with the illumination setting severely affect the spectral information obtained. Furthermore, the spatial component of analysis results gains in importance, as higher plants are represented by multiple plant organs such as leaves, stems and seed pods. The combination of hyperspectral images and 3D point clouds is a promising approach to face these problems. We present the generation and application of hyperspectral 3D plant models as a new, interesting application field for computer vision with a variety of challenging tasks. We summarize a geometric calibration method for hyperspectral pushbroom cameras using a reference object for the combination of spectral and spatial information. Furthermore, we show exemplary new calibration and analysis methods enabled by the hyperspectral 3D models in an experiment with sugar beet plants. An improved normalization, a comparison of image and 3D analysis, and the density estimation of infected surface points underline some of the new capabilities gained using this new data type. Based on such hyperspectral 3D models, the effects of plant geometry and sensor configuration can be quantified and modeled. In the future, reflectance models can be used to remove or weaken the geometry-related effects in hyperspectral images and, therefore, have the potential to improve automated plant phenotyping significantly.
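One normalization that hyperspectral 3D models enable can be sketched under a simple Lambertian assumption (an assumption for illustration; the abstract does not specify the authors' normalization): with surface normals known from the 3D model, the measured signal is divided by the cosine of the illumination incidence angle.

```python
# Hedged sketch: geometry-aware normalization of reflectance spectra,
# assuming a Lambertian surface and a known illumination direction.
import numpy as np

def cosine_normalize(spectra, normals, light_dir, eps=1e-6):
    """spectra: (N, B) spectra per 3D surface point; normals: (N, 3) unit
    normals; light_dir: (3,) unit vector from surface toward the light."""
    cos_i = np.clip(normals @ light_dir, eps, 1.0)  # incidence-angle cosine
    return spectra / cos_i[:, None]

# Toy example: two identical surfaces, one tilted 60 degrees from the light,
# so its measured spectrum appears darker by a factor of cos(60) = 0.5.
light = np.array([0.0, 0.0, 1.0])
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, np.sin(np.pi / 3), np.cos(np.pi / 3)]])
measured = np.array([[0.8, 0.6],
                     [0.4, 0.3]])
print(cosine_normalize(measured, normals, light))  # both rows -> [0.8, 0.6]
```

Real leaf surfaces deviate from the Lambertian model, which is why the abstract points toward fitted reflectance models as the longer-term solution.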