2017
DOI: 10.1186/s13007-017-0253-8

Multi-feature machine learning model for automatic segmentation of green fractional vegetation cover for high-throughput field phenotyping

Abstract: Background: Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when these methods are applied to images acquired in dynamic field environments. Results: In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with the state-of-the-art and …

Cited by 46 publications (38 citation statements)
References 34 publications (54 reference statements)
“…In order to target the canopy objects, it is significant to separate the regions of vegetation from the background. In previous studies, colour features converted from the original R, G and B channels have been used to enhance the contrast between green canopy and the background [28, 29], and a more sophisticated method employing machine learning model could perform the segmentation with 21 colour features [30]. For UAV remote sensing, the original RGB image was normally transformed into a greyscale image of a Vegetation Index (VI) [31, 33].…”
Section: Methods
confidence: 99%
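The transform mentioned above — converting an RGB image into a greyscale Vegetation Index image to enhance canopy/background contrast — can be sketched with the widely used Excess Green (ExG) index, ExG = 2g − r − b, computed on chromatic coordinates. This is an illustrative sketch of one common VI, not the specific 21-feature model cited in [30]; the function name and NumPy-based layout are assumptions.

```python
import numpy as np

def excess_green(rgb):
    """Convert an RGB image (H x W x 3, uint8) into an Excess Green (ExG)
    greyscale vegetation-index image.

    ExG = 2g - r - b, where r, g, b are chromatic coordinates
    (each channel divided by the per-pixel channel sum).
    Illustrative sketch of one common VI transform.
    """
    img = rgb.astype(np.float64)
    total = img.sum(axis=2)
    total[total == 0] = 1.0  # avoid division by zero on pure-black pixels
    r = img[..., 0] / total
    g = img[..., 1] / total
    b = img[..., 2] / total
    return 2.0 * g - r - b
```

Green pixels score high (a pure-green pixel gives ExG = 2), while achromatic pixels such as soil-grey score near zero, which is why a simple threshold on the VI image can often separate canopy from background in uniform scenes.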
“…To dissect green canopy from background variables, the Excess Green Red (ExGR) index was used (Equation (6)), with a threshold of > 0 to classify green vegetation [39,40]. Figure 6 shows an example of the produced mask, with reasonable agreement between visual green canopy and pixels classified as green by ExGR.…”
Section: Canopy Masking
confidence: 99%
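The ExGR masking step described above can be sketched as follows, using the standard definitions ExG = 2g − r − b and ExR = 1.4r − g (Meyer & Neto), with ExGR = ExG − ExR and pixels where ExGR > 0 classified as green vegetation. The function name and NumPy implementation are assumptions; this is a minimal sketch of the index, not the cited study's exact pipeline.

```python
import numpy as np

def exgr_mask(rgb):
    """Binary green-canopy mask from an RGB image (H x W x 3, uint8)
    using the Excess Green minus Excess Red (ExGR) index.

    ExG = 2g - r - b, ExR = 1.4r - g, on chromatic coordinates;
    pixels with ExGR > 0 are classified as green vegetation.
    """
    img = rgb.astype(np.float64)
    total = img.sum(axis=2)
    total[total == 0] = 1.0  # avoid division by zero on pure-black pixels
    r = img[..., 0] / total
    g = img[..., 1] / total
    b = img[..., 2] / total
    exg = 2.0 * g - r - b
    exr = 1.4 * r - g
    return (exg - exr) > 0
```

Subtracting ExR suppresses reddish soil and residue that a plain ExG threshold can misclassify, which is one reason the fixed threshold of 0 works reasonably well for this combined index.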
“…The method, based mainly on the color features of the images, achieved 90% accuracy with respect to the actual number of ears. Also for wheat, Sadeghi-Tehran et al [66] developed a deep-learning model for CC estimation using automatic segmentation of RGB images taken on wheat plots. The proposed method was more robust and accurate than other classical and machine-learning methodologies.…”
Section: Novelty of the DL Model Against Current Approaches Used For…
confidence: 99%