2018
DOI: 10.1016/j.isprsjprs.2018.04.002

A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches


Cited by 148 publications (141 citation statements)
References 110 publications
“…To ensure the accuracy of the assessment, 640 reference points, representing 40% of the training sample, were used to represent all land use classes. Google Earth images were then used as reference data to assess the accuracy of the classified maps [39,40].…”
Section: Land Use/Cover Changes
“…Similar to previous works in the literature [36,37], we randomly select 80% of the images from each class to train the SVM model, and the remaining 20% are used for testing. According to [43], overall accuracy and the confusion matrix are usually adopted as the metrics for accuracy assessment. Other related works, such as [44], also report measures derived from the confusion matrix, in which the Bradley-Terry model was used to quantify association in remotely sensed images.…”
Section: Results for Remote Sensing Scene Classification
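The overall-accuracy and confusion-matrix metrics this excerpt refers to can be sketched in a few lines. The class names and label lists below are illustrative only, not drawn from the cited study:

```python
from collections import Counter

def confusion_matrix(reference, predicted, classes):
    """Count (reference, predicted) label pairs into a nested dict,
    rows indexed by reference class, columns by predicted class."""
    counts = Counter(zip(reference, predicted))
    return {r: {p: counts[(r, p)] for p in classes} for r in classes}

def overall_accuracy(matrix):
    """Share of samples on the matrix diagonal, i.e. classified correctly."""
    correct = sum(row[c] for c, row in matrix.items())
    total = sum(sum(row.values()) for row in matrix.values())
    return correct / total

classes = ["water", "forest", "urban"]
ref = ["water", "water", "forest", "forest", "urban", "urban"]
pred = ["water", "forest", "forest", "forest", "urban", "water"]
print(overall_accuracy(confusion_matrix(ref, pred, classes)))  # 4 of 6 correct
```

Per-class measures such as user's and producer's accuracy fall out of the same matrix by normalizing rows or columns.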
“…The area under the curve (AUC) of an ROC/TOC plot is often used as a single measure of overall accuracy that summarizes numerous thresholds for the continuous variable [96]. There are also metrics for assessing the accuracy of object-based image analysis (OBIA, [97]), which we do not cover here (but see the supplementary information (SI)) because the choice of measure varies according to mapping objectives [65,98].…”
Section: Map Accuracy Assessment Procedures
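The AUC mentioned in this excerpt equals the probability that a randomly chosen positive sample outscores a randomly chosen negative one, which gives a compact threshold-free formulation. A minimal sketch of that rank-based view, with made-up scores and labels:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney rank statistic: the probability that a
    random positive scores above a random negative, ties counted as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]  # continuous classifier scores
labels = [1, 1, 0, 1, 0, 0]              # 1 = presence, 0 = absence
print(auc(scores, labels))  # 8/9: one positive is outranked by one negative
```

The same number is obtained by integrating the ROC curve with the trapezoidal rule; the rank form avoids building the curve explicitly.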
“…However, in practice this accounting is rarely done, and the uncertainty of map reference data is also rarely examined [34,38,57]. This tendency is illustrated by Ye et al. [65], who reviewed 209 journal articles on object-based image analysis and found that one third gave incomplete information about the sampling design and size of their map reference data, let alone any mention of error within the sample. Errors in map reference data can bias the map accuracy assessment [47,99], as well as estimates derived from the confusion matrix, such as land cover class proportions and their standard errors [46].…”
Section: Map Accuracy Assessment Procedures
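The biasing effect of reference-data error described in this excerpt can be illustrated with a small expected-value calculation for a binary map. The independence assumption and the numbers below are illustrative only, not taken from the works cited:

```python
def apparent_accuracy(true_accuracy, ref_error):
    """Expected measured overall accuracy for a binary map when a fraction
    `ref_error` of reference labels is flipped independently of map error.
    Map and reference agree when both are right or both are wrong."""
    return true_accuracy * (1 - ref_error) + (1 - true_accuracy) * ref_error

# A map that is truly 90% accurate appears only 86% accurate when scored
# against a reference sample with 5% label error.
print(apparent_accuracy(0.90, 0.05))
```

With error-free reference data (`ref_error = 0`) the measured and true accuracies coincide, which is why the quoted passage stresses examining reference-data uncertainty before interpreting a confusion matrix.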