2016
DOI: 10.5121/ijaia.2016.7201
Cross Dataset Evaluation of Feature Extraction Techniques for Leaf Classification

Abstract

Cited by 3 publications (6 citation statements). References 15 publications.
“…However, our main goal is to apply the described methods to cross-dataset validation. Comparable to [18], we want to study how a trained CNN applies to a real-world scenario, i.e. classifying leaves that were collected completely independently of the test set and therefore differ quite a lot due to influences of temperature, rainfall, solar irradiation, or seasons.…”
Section: Discussion
confidence: 99%
“…Reul et al [18] use a 1-nearest-neighbor classifier based on contour, curvature, color, Hu, HOCS, and binary pattern features. They achieve accuracies of about 99.37% on the Flavia dataset and 95.83% on the Foliage dataset.…”
Section: Introduction
confidence: 99%
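The 1-nearest-neighbor scheme described in this statement can be sketched as follows. Note this is a minimal illustration only: the feature vectors and species labels below are hypothetical placeholders, not data or results from [18].

```python
import math

def one_nearest_neighbor(train, query):
    """Return the label of the training sample closest to query
    under Euclidean distance (the 1-NN decision rule)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda pair: dist(pair[0], query))[1]

# Hypothetical leaf feature vectors (e.g. contour/curvature/colour statistics).
train = [([0.1, 0.8, 0.3], "Acer"),
         ([0.9, 0.2, 0.7], "Quercus")]
print(one_nearest_neighbor(train, [0.2, 0.7, 0.4]))  # → Acer
```

With hand-crafted descriptors such as contour, curvature, and Hu moments, 1-NN needs no training phase at all; classification cost grows linearly with the size of the reference set.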
“…It also removes holes in leaf images, extracts curvatures from boundaries, and measures smooth as well as serrated margins. This technique was used in [39] to extract the arc and area features of lobe-shaped leaf margins, but it is not suitable for all leaves. It was also applied to Costa Rican species in [40].…”
Section: Point and Edge-based Feature Descriptors
confidence: 99%
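The boundary-curvature idea in this statement can be sketched with discrete turning angles along a closed contour; smooth margins give small, slowly varying angles, while serrated margins give many large sign changes. This is an illustrative assumption of how such a descriptor might work, not the exact method of the cited papers, and the square boundary is a toy placeholder.

```python
import math

def turning_angles(boundary):
    """Discrete curvature proxy: signed turning angle (radians) at each
    vertex of a closed boundary polygon given as (x, y) points."""
    n = len(boundary)
    angles = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = (boundary[i - 1], boundary[i],
                                        boundary[(i + 1) % n])
        a1 = math.atan2(y1 - y0, x1 - x0)   # heading into the vertex
        a2 = math.atan2(y2 - y1, x2 - x1)   # heading out of the vertex
        d = a2 - a1
        # Wrap the difference into (-pi, pi].
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        angles.append(d)
    return angles

# A unit square traversed counter-clockwise: every corner turns +90 degrees.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print([round(a, 3) for a in turning_angles(square)])  # → [1.571, 1.571, 1.571, 1.571]
```

For any simple closed counter-clockwise contour the turning angles sum to 2π, which gives a quick sanity check; a margin-serration measure could then count, say, the sign changes along the angle sequence.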
“…
Classifier | Features | Dataset / species | Accuracy | Ref.
 | Hu, shape, texture | | 100% | [151]
Back propagation neural network | Shapes, angles and sinus of leaves | 111 leaves with 14 species | | [125]
 | Texture and wavelet feature | Grape varieties | 93.3% | [126]
 | Colour | Affected and unaffected vegetables | | [127]
 | Texture of colour co-occurrence | Disease identification | | [128]
 | Edge features of leaf | Neem, pine, oak | 90.33% | [129]
 | Leaf margin | Jasmine, arka, mango, neem and shigru | 85% | [130]
 | Morphological features | 450 leaves of 16 classes in Ayurveda and agriculture | 90% | [62]
 | Texture | Foxtail, crabgrass, velvet leaf, morning glory | 97% | [177]
Fuzzy based classifiers | Statistical features of leaf | | 97.6% | [26]
 | Texture, shape, colour | | 99.87% | [86]
K-nearest neighbour | Edge, vein, ring projection wavelet feature | | 87.14% | [120]
 | Geometrical features | | 80% | [134]
 | Leaf margin + texture | | 75.5% | [135]
 | Texton | Costa Rican, Flavia dataset | 99.1% | [40]
 | Texton | | 87.14% | [120]
 | Texture | ICL, Plumber, Smithsonian | 97.07% / 72.8% / 73.08% | [37]
 | HoCS, contour, colour, curvature | Flavia | 99.61% | [39]
 | Texture | | 97.55% | [94]
 | Run length sequence | | 93.17% | [152]
 | Contour-amplitude frequency descriptor | Swedish, ICL | 89.6% / 91.6% | [198]
Moving centre classifier | Moment invariant | | 92.6% | [137]
Bayesian classifier | Fourier descriptor | | 88% | [116]
Support vector machine | HoCS | Leafsnap | | [38]
 | Wavelet features | Ornamental plants | 95.83% | [114]
 | Fourier and texture | Australian Federal, Flavia, Foliage, Swedish and Middle European datasets | 100% / 99.7% / 99.8% / 99.2% | [93]
 | Kernel level descriptor | Flavia | 97.5% | [110,111]
 | Hu moments | Annona squamosa and Psidium guajava | 86.6% | [139]
 | Lacunarity | Flavia | 95.048% | [84]
 | HOG + Zernike moments | Swedish | 97 | [192]
Map reduce algorithm | Texture | Hierarchical big database | 91% | [196]
7.3 The ICL Dataset…”
Section: The Flavia Dataset
confidence: 99%