2020
DOI: 10.5935/1806-6690.20200073

USPLeaf: Automatic leaf area determination using a computer vision system

Cited by 3 publications (2 citation statements, published 2021 and 2024); references 0 publications.
“…Considering the maneuverability of track enhancement in the actual process, this paper reduced the resolution of the original track map, converted the RGB color space into the YCbCr color space, and carried out edge detection on the Y, Cb, and Cr components separately to extract edge feature information from the image and mark edge points. Then, the pixels to be interpolated and replaced were divided into textured-direction pixels and weakly textured pixels, that is, the points in the edge region and the inner region of the contour [27]. The lost pixels estimated by the first scan and interpolation were denoted I_h(2i, 2j), those estimated by the second scan and interpolation were denoted I_h(2i-1, 2j), and those estimated by the third scan and interpolation were denoted I_h(2i, 2j-1).…”
Section: Extracting Edge Information of Sports Video Images
mentioning
confidence: 99%
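The excerpt above describes a concrete pipeline: downscale the image, convert RGB to YCbCr, and run edge detection on each component separately before merging the marked edge points. Below is a minimal sketch of that per-channel step, assuming OpenCV (which names the color space YCrCb); the file name, scale factor, and Canny thresholds are illustrative only, and the cited paper's directional interpolation of the I_h(2i, 2j) pixels is not reproduced here.

```python
# Per-channel edge detection in YCbCr space, as described in the excerpt.
# Assumptions: OpenCV is available; "track.png" and the Canny thresholds
# are placeholders, not values from the cited paper.
import cv2

img = cv2.imread("track.png")                      # BGR image (OpenCV default)
small = cv2.resize(img, None, fx=0.5, fy=0.5,      # reduce resolution first,
                   interpolation=cv2.INTER_AREA)   # as the authors describe

# Convert into the YCbCr family (OpenCV orders the channels Y, Cr, Cb).
ycrcb = cv2.cvtColor(small, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

# Edge detection on each component separately, then merge the per-channel
# edge maps into a single mask of marked edge points.
edges = [cv2.Canny(ch, 50, 150) for ch in (y, cb, cr)]
edge_mask = edges[0] | edges[1] | edges[2]
```

Running the luma and both chroma channels through the detector catches edges that differ in color but not in brightness, which a Y-only pass would miss; the union of the three maps is then the set of edge points the interpolation scans treat as textured-direction pixels.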
“…Comparison of leaf area measurement errors between methods on a set of 145 leaves belonging to 69 species. The accuracy of FAMeLeS on images from field sampling mixed with web images attests to the efficiency of the method even with images from multiple devices (scanner or photographs). While some authors warned about the limitations of their methods depending on image quality (Meira et al, 2020) or brightness (Haqqiman Radzali et al, 2016), FAMeLeS has proven to be very tolerant of the diversity of image sizes (compressed or not), resolutions, and lighting conditions, which makes it possible to work on existing leaf image databases or to aggregate data sets from different sources. The combination of such a high tolerance to image type and leaf characteristics offers the opportunity to build standardized databases of leaf binary images from any biome, and to produce large amounts of data to address functional ecology questions, to study biogeographical patterns and drivers of morphological leaf traits, and to feed allometric models and algorithms for plant leaf classification.…”
mentioning
confidence: 99%
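For context, the leaf area determination that USPLeaf automates and that the excerpt compares against reduces to segmenting the leaf in a calibrated image and counting foreground pixels. The sketch below shows that basic idea under stated assumptions, using OpenCV with Otsu thresholding; the file name and DPI value are hypothetical, and this is not the published USPLeaf or FAMeLeS implementation, which add segmentation cleanup and calibration steps.

```python
# Pixel-count leaf area estimation from a scanned leaf image.
# Assumptions: white scanner background, known scan resolution (DPI),
# and "leaf_scan.png" as a placeholder file name.
import cv2

DPI = 300                                   # assumed scanner resolution
img = cv2.imread("leaf_scan.png", cv2.IMREAD_GRAYSCALE)

# Otsu threshold separates the (darker) leaf from the white background;
# THRESH_BINARY_INV makes the leaf the foreground (nonzero) region.
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

leaf_pixels = cv2.countNonZero(binary)
pixels_per_cm2 = (DPI / 2.54) ** 2          # (pixels per cm) squared
area_cm2 = leaf_pixels / pixels_per_cm2
print(f"Estimated leaf area: {area_cm2:.2f} cm^2")
```

Because the area comes from a pixel count scaled by resolution, the method's sensitivity to image quality and brightness noted in the excerpt enters entirely through the segmentation step, which is why tolerance to mixed image sources is the property the quoted comparison emphasizes.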