2021
DOI: 10.1186/s13007-021-00787-6

DeepCob: precise and high-throughput analysis of maize cob geometry using deep learning with an application in genebank phenomics

Abstract: Background Maize cobs are an important component of crop yield that exhibit high diversity in size, shape, and color in native landraces and modern varieties. Various phenotyping approaches have been developed to measure maize cob parameters in a high-throughput fashion. More recently, deep learning methods such as convolutional neural networks (CNNs) became available and were shown to be highly useful for high-throughput plant phenotyping. We aimed at comparing classical image segmentation with deep …


Cited by 10 publications (8 citation statements)
References 60 publications (57 reference statements)
“…Considering an acquisition time of 15 s per ear (cleaning and imaging), it is a great improvement over comparable systems for which information is available in the literature: one minute per ear [ 21 ] and (Warman, 2021). The choice of imaging 6 sides of the ear was tested and proven trustworthy in this study for precise measurement, but could be reconsidered in cases where faster acquisition and lower precision are acceptable; this would greatly increase the throughput, comparable to single-imaging systems [ 17 , 19 ]. Nonetheless, because IR images form the basis of the analysis used to normalize ear and grain colors, the pipeline developed and presented in this study cannot be used as-is to extract phenotypic variables from simpler RGB imaging systems alone (e.g., smartphone pictures taken in the field with common RGB cameras).…”
Section: Discussion
confidence: 99%
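As a back-of-the-envelope check of the throughput gain described in the statement above (the two acquisition times come from the quoted text; the rest is simple arithmetic):

```python
# Rough throughput comparison implied by the quoted acquisition times.
ACQ_FAST_S = 15   # EARBOX: cleaning + imaging per ear, in seconds
ACQ_SLOW_S = 60   # comparable system reported in the literature

ears_per_hour_fast = 3600 // ACQ_FAST_S  # 240 ears/hour
ears_per_hour_slow = 3600 // ACQ_SLOW_S  # 60 ears/hour
print(ears_per_hour_fast, ears_per_hour_slow)  # 240 60
```

So the 15 s acquisition time corresponds to roughly a fourfold throughput increase over the one-minute systems cited.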
“…Nonetheless, because IR images form the basis of the analysis used to normalize ear and grain colors, the pipeline developed and presented in this study cannot be used as-is to extract phenotypic variables from simpler RGB imaging systems alone (e.g., smartphone pictures taken in the field with common RGB cameras). For the analysis and variable extraction, most pipelines do not extract as much information as the EARBOX system from non-destructive analysis of ears and run faster, but on non-comparable hardware (a few seconds for both [ 15 ] and [ 19 ]). The benchmark done in this study shows that affordable laptop hardware (~ 1500 euros) can be used to extract masks and phenotypic variables from data acquired with the EARBOX with reasonable computing time (~ 2 min per ear).…”
Section: Discussion
confidence: 99%
“…However, the image processing domain is not without bottlenecks. To overcome the challenge of segmenting plant body parts in plant images, Kienbaum et al. (2021) used multiple preprocessing operations. For example, a linear or polynomial thresholding function may be applied to plant images to correctly identify shoot area, canopy temperature, and vegetation indices, among other things.…”
Section: Introduction
confidence: 99%
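The linear thresholding step mentioned in the statement above can be sketched as follows. This is a minimal illustrative example only: the excess-green index and the threshold value are assumptions made for the sketch, not the exact preprocessing used by the cited works.

```python
import numpy as np

def shoot_mask(rgb, threshold=0.1):
    """Segment likely plant (shoot) pixels from an RGB image by applying
    a linear threshold to an excess-green vegetation index.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns a boolean mask, True where a pixel looks like plant tissue.
    The index (2G - R - B) and threshold are illustrative choices.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b       # excess-green index, high for green pixels
    return exg > threshold      # linear thresholding function

# Toy example: a 2x2 green patch on a uniform gray background.
img = np.full((4, 4, 3), 0.5)       # gray background: exg = 0
img[1:3, 1:3] = [0.1, 0.8, 0.1]     # green "plant" pixels: exg = 1.4
mask = shoot_mask(img)
print(mask.sum())  # 4 plant pixels detected
```

Real pipelines typically combine such an index threshold with morphological cleanup and, as the abstract notes, may replace it entirely with CNN-based segmentation.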