Genomics-assisted breeding methods have developed rapidly with novel technologies such as next-generation sequencing, genomic selection and genome-wide association studies. However, phenotyping is still time consuming and remains a serious bottleneck in genomics-assisted breeding. In this study, we established a high-throughput phenotyping system for sorghum plant height and its response to nitrogen availability; the system relies on unmanned aerial vehicle (UAV) remote sensing with either an RGB or a near-infrared, green and blue (NIR-GB) camera. We evaluated the potential of remote sensing to provide phenotype training data for a genomic prediction model. UAV remote sensing with the NIR-GB camera and the 50th percentile of the digital surface model (DSM), which is an indicator of height, performed well. The correlation coefficient between plant height measured by UAV remote sensing (PHUAV) and plant height measured with a ruler (PHR) was 0.523. Because PHUAV was overestimated (probably because of taller plants in adjacent plots), the correlation coefficient between PHUAV and PHR increased to 0.678 when only one of the two replications (that with the lower PHUAV value) was used. Genomic prediction modeling performed well under the low-fertilization condition, probably because PHUAV overestimation was smaller there owing to the shorter plants. The predicted values from the PHUAV and PHR models were highly correlated with each other (r = 0.842), suggesting that the genomic prediction models generated with PHUAV and PHR were almost identical and that UAV remote sensing performed similarly to traditional measurement in genomic prediction modeling. UAV remote sensing therefore has high potential to increase the throughput of phenotyping and decrease its cost, and it will be an important and indispensable tool for high-throughput genomics-assisted plant breeding.
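The height metric described above can be illustrated with a minimal sketch: the plot-level plant height is taken as the 50th percentile of DSM elevations within the plot, minus the ground (bare-soil) elevation. The function name, the 3×3 patch and the ground elevation below are hypothetical, not taken from the paper.

```python
import numpy as np

def plot_height_from_dsm(dsm_plot, ground_elevation, q=50):
    """Estimate plot-level plant height as the q-th percentile of
    DSM surface elevations within one plot, minus the bare-soil
    (ground) elevation. dsm_plot is a 2-D array of elevations in m."""
    return np.percentile(dsm_plot, q) - ground_elevation

# Hypothetical example: a 3x3 plot patch with the ground at 10.0 m.
dsm = np.array([[10.2, 11.0, 10.8],
                [11.5, 12.0, 11.2],
                [10.9, 11.4, 11.1]])
height = plot_height_from_dsm(dsm, ground_elevation=10.0)  # median-based height
```

Using the median rather than the maximum makes the estimate robust to isolated tall outliers (e.g. a single leaf or a neighbouring plant reaching over the plot boundary), which matters given the overestimation issue the abstract reports.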
Fully automated yield estimation of intact fruits prior to harvesting provides various benefits to farmers. Until now, several studies have been conducted to estimate fruit yield using image-processing technologies. However, most of these techniques require thresholds for features such as color, shape and size. In addition, their performance strongly depends on the thresholds used, although optimal thresholds tend to vary between images. Furthermore, most of these techniques have attempted to detect only mature and immature fruits, although the number of young fruits is more important for the prediction of long-term fluctuations in yield. In this study, we aimed to develop a method to accurately detect individual intact tomato fruits, including mature, immature and young fruits, on a plant using a conventional RGB digital camera in conjunction with machine learning approaches. The developed method did not require threshold values to be adjusted for each image because image segmentation was based on classification models trained on the color, shape, texture and size features of the images. The results of fruit detection in the test images showed that the developed method achieved a recall of 0.80 and a precision of 0.88. The recall values for mature, immature and young fruits were 1.00, 0.80 and 0.78, respectively.
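The precision and recall figures reported above follow the standard detection definitions: precision = TP/(TP+FP), recall = TP/(TP+FN). A minimal sketch, with hypothetical counts chosen only to reproduce the reported 0.88/0.80 values, not taken from the paper:

```python
def precision_recall(tp, fp, fn):
    """Standard detection metrics.
    precision = TP / (TP + FP): fraction of detections that are correct.
    recall    = TP / (TP + FN): fraction of true fruits that are found."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts consistent with the reported figures:
# 88 correct detections, 12 false detections, 22 missed fruits.
p, r = precision_recall(tp=88, fp=12, fn=22)
```

The per-class recall values (1.00 for mature, 0.78 for young fruits) would be computed the same way, restricting TP and FN to each maturity class.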
The detection of wheat heads in plant images is an important task for estimating pertinent wheat traits, including head population density and head characteristics such as health, size, maturity stage, and the presence of awns. Several studies have developed methods for wheat head detection from high-resolution RGB imagery based on machine learning algorithms. However, these methods have generally been calibrated and validated on limited datasets. High variability in observational conditions, genotypic differences, development stages, and head orientation makes wheat head detection a challenge for computer vision. Further, possible blurring due to motion or wind and overlap between heads in dense populations make this task even more complex. Through a joint international collaborative effort, we have built a large, diverse, and well-labelled dataset of wheat images, called the Global Wheat Head Detection (GWHD) dataset. It contains 4700 high-resolution RGB images and 190000 labelled wheat heads collected from several countries around the world at different growth stages and with a wide range of genotypes. Guidelines for image acquisition, association of minimal metadata to respect FAIR principles, and consistent head-labelling methods are proposed for developing new head detection datasets. The GWHD dataset is publicly available at http://www.global-wheat.com/ and is aimed at developing and benchmarking methods for wheat head detection.
Sorghum (Sorghum bicolor L. Moench) is a C4 tropical grass that plays an essential role in providing nutrition to humans and livestock, particularly in marginal rainfall environments. The timing of head development and the number of heads per unit area are key adaptation traits to consider in agronomy and breeding, but they are time consuming and labor intensive to measure. We propose a two-step machine-based image processing method to detect and count the number of heads from high-resolution images captured by unmanned aerial vehicles (UAVs) in a breeding trial. To demonstrate the performance of the proposed method, 52 images were manually labeled; the precision and recall of head detection were 0.87 and 0.98, respectively, and the coefficient of determination (R2) between the manual and new methods of counting was 0.84. To verify the utility of the method in breeding programs, a geolocation-based plot segmentation method was applied to pre-processed ortho-mosaic images to extract >1000 plots from the original RGB images. Forty of these plots were randomly selected and labeled manually; the precision and recall of detection were 0.82 and 0.98, respectively, and the coefficient of determination between manual and algorithm counts was 0.56. The major source of error was plant morphology: heads were displayed both inside and outside the plot in which the plants were sown, and were therefore sometimes allocated to a neighboring plot. Finally, potential applications in yield estimation from UAV-based imagery of agronomy experiments and in scouting of production fields are also discussed.
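Evaluating a head detector against manual labels, as described above, requires matching predicted boxes to labeled boxes before precision and recall can be counted. The paper does not specify its matching procedure; the sketch below shows one common approach, greedy matching at an intersection-over-union (IoU) threshold, with hypothetical box coordinates for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(pred, truth, thresh=0.5):
    """Greedily match predicted boxes to ground-truth boxes at an IoU
    threshold; each truth box is matched at most once.
    Returns (TP, FP, FN) counts for precision/recall."""
    unmatched = list(truth)
    tp = 0
    for p in pred:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thresh:
            unmatched.remove(best)
            tp += 1
    fp = len(pred) - tp        # detections with no matching head
    fn = len(unmatched)        # labeled heads that were missed
    return tp, fp, fn

# Hypothetical example: one correct detection, one false alarm, one miss.
tp, fp, fn = match_detections(
    pred=[(0, 0, 2, 2), (10, 10, 12, 12)],
    truth=[(0, 0, 2, 2), (5, 5, 7, 7)])
```

The same counting works for the plot-level evaluation, where a head straddling a plot boundary may be matched against the labels of the neighboring plot and thus show up as both a false positive and a false negative.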
Ground cover is an important physiological trait affecting crop radiation capture, water-use efficiency and grain yield. It is challenging to efficiently measure ground cover with reasonable precision for large numbers of plots, especially in tall crop species. Here we combined two image-based methods to estimate plot-level ground cover for three species, from either an ortho-mosaic or undistorted (i.e. corrected for lens and camera effects) images captured by cameras on a low-altitude unmanned aerial vehicle (UAV). Reconstructed point clouds and ortho-mosaics for the whole field were created, and a customised image processing workflow was developed to (1) segment the 'whole-field' datasets into individual plots and (2) 'reverse-calculate' each plot from each undistorted image. Ground cover for individual plots was calculated by an efficient vegetation segmentation algorithm. For 79% of plots, estimated ground cover was greater from the ortho-mosaic than from the images, particularly when plants were small or when older, taller plants were grown in large plots. While there was good agreement between the ground cover estimates from the ortho-mosaic and from images in which the target plot was positioned at a near-nadir view near the centre of the image (cotton: R2 = 0.97, sorghum: R2 = 0.98, sugarcane: R2 = 0.84), ortho-mosaic estimates were 5% greater than estimates from these near-nadir images. Because each plot appeared in multiple images, there were multiple estimates of ground cover, some of which should be excluded, e.g. when the plot lies near the edge of an image. Considering only the images with a near-nadir view, the reverse calculation provides a more precise estimate of ground cover than the ortho-mosaic. The methodology is suitable for high-throughput phenotyping in agronomy, physiology and breeding applications for different crop species, and can be extended to provide pixel-level data from other types of cameras, including thermal and multi-spectral models.
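Once a plot image is isolated, ground cover is simply the fraction of pixels classified as vegetation. The abstract does not name its segmentation algorithm; as a stand-in, the sketch below uses the common excess-green index (ExG = 2g − r − b on chromaticity-normalised channels) with a hypothetical threshold, which is one widely used way to separate green canopy from soil in RGB imagery.

```python
import numpy as np

def ground_cover_exg(rgb, thresh=0.1):
    """Fraction of vegetated pixels in one plot image.
    Segmentation here uses the excess-green index as an illustrative
    stand-in for the paper's (unspecified) algorithm.
    rgb: float array of shape (H, W, 3), values in [0, 1]."""
    total = rgb.sum(axis=2) + 1e-8          # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2 * g - r - b                     # green-dominant pixels score high
    return float((exg > thresh).mean())

# Hypothetical 2x2 image: two pure-green pixels over two grey soil pixels.
rgb = np.array([[[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]],
                [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]])
cover = ground_cover_exg(rgb)
```

A per-plot pipeline would apply this either to the plot's region of the ortho-mosaic or to each reverse-calculated undistorted view, then keep only the near-nadir estimates, as the abstract recommends.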
Unmanned aircraft systems (UAS) are particularly powerful tools for plant phenotyping, owing to their reasonable cost of procurement and deployment, ease and flexibility of control and operation, reconfigurable sensor payloads that diversify sensing, and ability to fit seamlessly into a larger connected phenotyping network. These advantages have expanded the use of UAS-based plant phenotyping in research and breeding applications. This paper reviews the state of the art in the deployment, collection, curation, storage, and analysis of data from UAS-based phenotyping platforms. We discuss pressing technical challenges, identify future trends in UAS-based phenotyping that the plant research community should be aware of, and pinpoint key plant science and agronomic questions that can be resolved with the next generation of UAS-based imaging modalities and associated data analysis pipelines. This review provides a broad account of the state of the art in UAS-based phenotyping and aims to reduce the barrier to entry for plant science practitioners interested in deploying this imaging modality for phenotyping in plant breeding and research.