Abstract: Background
Maize cobs are an important component of crop yield that exhibit a high diversity in size, shape and color in native landraces and modern varieties. Various phenotyping approaches have been developed to measure maize cob parameters in a high-throughput fashion. More recently, deep learning methods such as convolutional neural networks (CNNs) became available and have been shown to be highly useful for high-throughput plant phenotyping. We aimed at comparing classical image segmentation with deep …
“…Considering an acquisition time of 15 s per ear (cleaning and imaging), it is a clear improvement over comparable systems for which information is available in the literature: one minute per ear ([21] and Warman, 2021). Imaging 6 sides of the ear was tested and proven trustworthy in this study for precise measurement, but could be reconsidered in cases where faster acquisition and lower precision are acceptable; this would greatly increase throughput, comparable to single-image systems [17, 19]. Nonetheless, since IR images are the basis of the analysis used to normalize ear and grain colors, the pipeline developed and presented in this study cannot be used as is to extract phenotypic variables from simpler RGB imaging systems alone (e.g., smartphone pictures taken in the field with common RGB cameras).…”
Section: Discussion (mentioning)
confidence: 99%
“…Nonetheless, since IR images are the basis of the analysis used to normalize ear and grain colors, the pipeline developed and presented in this study cannot be used as is to extract phenotypic variables from simpler RGB imaging systems alone (e.g., smartphone pictures taken in the field with common RGB cameras). For analysis and variable extraction, most pipelines extract less information than the EARBOX system from non-destructive analysis of ears but run faster, albeit on non-comparable hardware (a few seconds per ear for both [15] and [19]). The benchmark done in this study shows that an affordable laptop (~1500 euros) can be used to extract masks and phenotypic variables from data acquired with the EARBOX in reasonable computing time (~2 min per ear).…”
Section: Discussion (mentioning)
confidence: 99%
“…The system developed here can also be distinguished by the phenotypic variability used to develop, train, and test its robustness. Most studies focus on specific colors and types of ears and grains [1, 21], while a few explore variability across commercial hybrids and various ear, grain and cob colors [13, 15, 19, 22], or abortion phenotypes [12]; none, however, investigate the whole range of these possibilities, in particular water deficits at flowering combined with biotic stresses, with a single model training. Developing methods that can treat both healthy ears and ears suffering from drought or disease is an important step toward efficient selection and research tools for field application, addressing the image-analysis bottleneck of phenomics [36].…”
Section: Discussion (mentioning)
confidence: 99%
“…It is a highly flexible, trainable framework that has been widely validated in many scientific domains, including plant science [12, 27–32]. More specifically for maize, Kienbaum et al. [19] trained Mask R-CNN for cob segmentation on images and highlighted the advantages of this type of framework over classical image analysis techniques: robustness and accuracy.…”
Background
Characterizing plant genetic resources and their response to the environment through accurate measurement of relevant traits is crucial to genetics and breeding. Spatial organization of the maize ear provides insights into the response of grain yield to environmental conditions. Current automated methods for phenotyping the maize ear do not capture these spatial features.
Results
We developed EARBOX, a low-cost, open-source system for automated phenotyping of maize ears. EARBOX integrates open-source technologies for both software and hardware that facilitate its deployment and improvement for specific research questions. The imaging platform consists of a customized box in which ears are repeatedly imaged as they rotate via motorized rollers. With deep learning based on convolutional neural networks, the image analysis algorithm uses a two-step procedure: ear-specific grain masks are first created and subsequently used to extract a range of trait data per ear, including ear shape and dimensions, the number of grains and their spatial organisation, and the distribution of grain dimensions along the ear. The reliability of each trait was validated against ground-truth data from manual measurements. Moreover, EARBOX derives novel traits, inaccessible through conventional methods, especially the distribution of grain dimensions along grain cohorts, relevant for ear morphogenesis, and the distribution of abortion frequency along the ear, relevant for plant response to stress, especially soil water deficit.
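The two-step procedure described above lends itself to a simple sketch. The snippet below is a minimal illustration only, not the EARBOX code: `predict_grain_mask` is a hypothetical stand-in for the trained CNN (here a plain threshold), and the trait extraction reduces a binary grain mask to a few pixel-level measures with NumPy.

```python
import numpy as np

def predict_grain_mask(image):
    # Placeholder for the trained CNN (hypothetical): here we simply
    # threshold the first channel. A real model would return a learned mask.
    return (image[..., 0] > 0.5).astype(np.uint8)

def extract_traits(mask):
    # Step 2: derive per-ear traits from the binary grain mask.
    rows = mask.sum(axis=1)                # grain pixels per image row
    ear_rows = np.flatnonzero(rows)        # rows containing grain pixels
    length_px = ear_rows[-1] - ear_rows[0] + 1 if ear_rows.size else 0
    return {
        "grain_area_px": int(mask.sum()),  # total grain area in pixels
        "ear_length_px": int(length_px),   # extent along the ear axis
        "row_profile": rows,               # spatial distribution along the ear
    }

# Toy image: a bright band of "grain" pixels on a dark background.
img = np.zeros((10, 8, 3))
img[2:7, 1:7, 0] = 1.0
traits = extract_traits(predict_grain_mask(img))
print(traits["grain_area_px"], traits["ear_length_px"])  # 30 5
```

The row profile is the kind of per-position signal from which distributions along the ear (grain dimensions, abortion frequency) could be derived.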
Conclusions
The proposed system provides robust and accurate measurements of maize ear traits including spatial features. Future developments include grain type and colour categorisation. This method opens avenues for high-throughput genetic or functional studies in the context of plant adaptation to a changing environment.
“…However, the image processing domain is not without bottlenecks. To overcome the challenge of segmenting plant body parts in plant images, Kienbaum et al. (2021) used multiple preprocessing operations. For example, a linear or polynomial thresholding function may be applied to plant images to correctly identify shoot area, canopy temperature, and vegetation indices, among other variables.…”
The workflow of this research is based on numerous hypotheses involving the use of pre-processing methods, wheat canopy segmentation methods, and whether existing models from past research can be adapted to classify wheat crop water stress. To construct an automated model for water stress detection, the most useful pre-processing operations were found to be total variation denoising with an L1 data-fidelity term (TV-L1), solved with a primal-dual algorithm, and min-max contrast stretching. For wheat canopy segmentation, a curve-fit-based K-means algorithm (Cfit-kmeans) was validated as the most accurate method using the intersection-over-union metric. For automated water stress detection, rapid prototyping of machine learning models showed that only nine models needed to be explored. After extensive grid-search-based hyper-parameter tuning and 10-fold cross-validation, the random forest algorithm achieved the highest global diagnostic accuracy (91.164%) of the nine algorithms tested and is the most suitable for constructing water stress detection models.
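Two of the building blocks named above, min-max contrast stretching and the intersection-over-union (IoU) metric used to validate segmentation, are standard operations. The snippet below is an illustrative NumPy sketch, not the authors' implementation, and it omits the TV-L1 denoising and Cfit-kmeans steps:

```python
import numpy as np

def minmax_stretch(img):
    # Min-max contrast stretching: rescale intensities to [0, 1].
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img, dtype=float)  # flat image: nothing to stretch
    return (img - lo) / (hi - lo)

def iou(pred, truth):
    # Intersection over union between two binary masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

a = np.array([[10.0, 20.0], [30.0, 40.0]])
stretched = minmax_stretch(a)      # min maps to 0.0, max maps to 1.0

pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(iou(pred, truth))            # 0.5
```

An IoU of 1.0 means the predicted and ground-truth canopy masks coincide exactly, which is how segmentation methods such as Cfit-kmeans can be ranked against each other.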