2006
DOI: 10.1080/01431160500275762
Comparing accuracy assessments to infer superiority of image classification methods

Abstract: The z-test based on the Kappa statistic is commonly used to infer the superiority of one map production method over another. Typically, the same reference data set is used to calculate and then compare the Kappas of the two maps. This data structure easily leads to dependence between the two error matrices, which may result in overly large variance estimates and overly conservative inference about the difference in accuracy between the two methods. Tests considering the dependency between the error matrices would be m…
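In its usual independent-sample form, the z-test described in the abstract compares two Kappa estimates against the pooled standard error of their difference. The following is a minimal Python sketch of that form; the Kappa values and their variance estimates are hypothetical and assumed to be precomputed (e.g. by the delta-method formulas common in accuracy-assessment texts), and the function name is illustrative.

```python
import math
from scipy.stats import norm

def kappa_z_test(k1, var1, k2, var2):
    """Two-sided z-test for the difference of two Kappa coefficients.

    Assumes the two error matrices come from independent samples:
    the assumption that fails when both maps are scored against the
    same reference data, as the abstract points out.
    """
    z = (k1 - k2) / math.sqrt(var1 + var2)
    p = 2.0 * norm.sf(abs(z))  # two-sided p-value
    return z, p

# Hypothetical Kappas and variance estimates for two maps:
z, p = kappa_z_test(k1=0.78, var1=0.0009, k2=0.71, var2=0.0011)
print(f"z = {z:.2f}, p = {p:.3f}")
```

When the samples are dependent, `var1 + var2` overstates the variance of the difference because the (typically positive) covariance term is omitted, which is exactly the conservatism the abstract describes.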


Cited by 134 publications (72 citation statements, 2008–2024).
References 8 publications (11 reference statements).
“…However, this may not be appropriate if the same sample of sites is used in the comparison [29][30][31], as these coefficients assume that the samples used in their calculations are independent. We have used the same set of reference points for testing the accuracy of maps produced by the MLC and PCC methods for the same year, to avoid differences in accuracy due to sampling variability.…”
Section: Comparing Classifier Performance (mentioning; confidence: 99%)
“…We have used the same set of reference points for testing the accuracy of maps produced by the MLC and PCC methods for the same year, to avoid differences in accuracy due to sampling variability. For this reason, we have performed McNemar's test [30] to evaluate the superiority of the LULC maps resulting from post-classification over the MLC-classified maps. McNemar's test is preferable because it is non-parametric and very simple to understand and execute.…”
Section: Comparing Classifier Performance (mentioning; confidence: 99%)
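For readers unfamiliar with the mechanics, here is a minimal sketch of McNemar's test for two classifiers scored at the same reference sites. The variable names are illustrative, and the continuity-corrected chi-squared form is used; it is an assumption of this sketch, not a detail quoted from the citing paper.

```python
def mcnemar_chi2(labels_a, labels_b, truth):
    """Continuity-corrected McNemar chi-squared (1 degree of freedom)
    for two classifications assessed on the same reference sites.

    Only the discordant sites enter the statistic:
      b = sites where classifier A is correct and B is wrong,
      c = sites where classifier A is wrong and B is correct.
    """
    b = sum(a == t and m != t for a, m, t in zip(labels_a, labels_b, truth))
    c = sum(a != t and m == t for a, m, t in zip(labels_a, labels_b, truth))
    if b + c == 0:
        raise ValueError("no discordant sites; the test is undefined")
    return (abs(b - c) - 1) ** 2 / (b + c)  # compare to 3.84 at alpha = 0.05
```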
“…Accuracy and comparison were determined by the overall accuracy coefficient and the Kappa index. The significance of the Kappa indices was analyzed through z-testing, which made it possible to verify whether the neural-network classification was better than a random classification (De Leeuw et al., 2006). The selected neural network was applied to fruits in the 10 maturity weeks evaluated, yielding the percentage of fruits classified into the Immature class (A) and the Mature class (B).…”
Section: Fruit Classification To Determine Harvest Moment In Function (mentioning; confidence: 99%)
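The z-test cited here (De Leeuw et al., 2006) compares a single Kappa against zero, i.e. against a random classification. A minimal sketch, assuming the Kappa and its variance estimate are already computed; the values shown are hypothetical:

```python
import math
from scipy.stats import norm

def kappa_vs_random(kappa, var_kappa):
    """One-sided z-test of H0: kappa = 0 (no better than chance)
    against H1: kappa > 0."""
    z = kappa / math.sqrt(var_kappa)
    return z, norm.sf(z)  # one-sided p-value

z, p = kappa_vs_random(kappa=0.83, var_kappa=0.0007)
print(f"z = {z:.1f}, p = {p:.2g}")
```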
“…McNemar's test allows separate classifications that were assessed using the same set of accuracy points to be compared without bias. McNemar's test is a non-parametric test based on a binary distinction between correct and incorrect class allocations that uses a 2 × 2 matrix to calculate a chi-squared value [35,36]. McNemar's test of the multispectral NAIP, NDVI, and CHM classification compared to the multispectral NAIP and NDVI classification shows that the classifications are statistically different at the p = 0.01 level.…”
Section: Significance Of Including Lidar Data (mentioning; confidence: 99%)
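In practice the 2 × 2 matrix of correct/incorrect allocations can be fed to an off-the-shelf implementation rather than computed by hand. A sketch using statsmodels; the table entries are hypothetical counts, not the NAIP/NDVI/CHM results quoted above:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: classification A correct / incorrect;
# columns: classification B correct / incorrect (hypothetical counts).
table = np.array([[412, 23],
                  [41, 24]])

result = mcnemar(table, exact=False, correction=True)  # chi-squared form
print(result.statistic, result.pvalue)
```

Only the off-diagonal (discordant) cells influence the statistic; the diagonal cells, where both classifications agree, cancel out of the comparison.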