2019
DOI: 10.18494/sam.2019.2584
Comparative Analysis of Generalized Intersection over Union and Error Matrix for Vegetation Cover Classification Assessment

Abstract: The result of vegetation cover classification greatly depends on the classification method. Accuracy analysis in remote sensing is mostly performed using the error matrix. In recent remote sensing, image classification has been carried out on the basis of deep learning. In the field of image processing in computer science, Intersection over Union (IoU) is mainly used for accuracy analysis. In this study, the error matrix, which is frequently used in remote sensing, and IoU, which is mainly used for deep learn…

Cited by 10 publications (6 citation statements)
References 13 publications
“…The key bone developmental grades and locations were the categories we sorted the results into following the KBS. We assessed location accuracy using the Intersection over Union (IOU) ( 33 ). After plotting the confusion matrix, we calculated the key bone developmental grade classification data’s accuracy and precision (weighted average).…”
Section: Methods
confidence: 99%
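The workflow in the quoted passage (IoU for localization, a confusion matrix for accuracy and weighted-average precision of the grade classification) can be sketched with plain NumPy. The labels, predictions, and three-grade setup below are invented for illustration, not taken from the cited work.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def accuracy(cm):
    """Correctly classified samples over all samples."""
    return np.trace(cm) / cm.sum()

def weighted_precision(cm):
    """Per-class precision, weighted by class support (true counts)."""
    support = cm.sum(axis=1)       # true samples per class
    pred_counts = cm.sum(axis=0)   # predicted samples per class
    per_class = np.divide(np.diag(cm), pred_counts,
                          out=np.zeros(len(cm)), where=pred_counts > 0)
    return (per_class * support).sum() / support.sum()

# Hypothetical developmental-grade labels (3 grades)
y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 2, 1, 1]
cm = confusion_matrix(y_true, y_pred, 3)
print(accuracy(cm), weighted_precision(cm))
```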
“…In the first part, both the PP-PicoDet and NanoDet models are anchor-free, while the YOLOv5 model employed the K-means method to obtain anchors such as [[23,24, 27,28, 26,34], [32,33, 31,41, 37,38], [38,48, 54,58, 66,69]]. The images were preprocessed before model training, including resizing to the size required by each model (640×640 for YOLOv5, 416×416 for PP-PicoDet and NanoDet) and normalizing pixel values to the range (0, 1).…”
Section: Training Model
confidence: 99%
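A minimal NumPy sketch of the two preprocessing steps the quote describes: resizing to a fixed square input and scaling pixel values into [0, 1]. Nearest-neighbour indexing stands in for whatever interpolation the cited work actually used, and the input image is random data.

```python
import numpy as np

def preprocess(img, size):
    """Nearest-neighbour resize to (size, size), then scale pixels to [0, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each target row
    cols = np.arange(size) * w // size   # source column for each target column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0

# Hypothetical 8-bit RGB image
img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
x = preprocess(img, 640)   # 640x640, as the quote gives for YOLOv5
print(x.shape)
```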
“…Correlation is also highly sensitive to non-linear relationships, noise, subgroups, and outliers [51,52], which can make the evaluation incorrect. According to [53,54], the Dice score and the mean Intersection over Union (mIoU) are better suited to evaluating segmentation masks. Defined by:…”
Section: Performance Evaluation
confidence: 99%
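The quoted passage breaks off before the formulas it announces. The standard definitions of the two metrics it names, in terms of per-class true positives (TP), false positives (FP), and false negatives (FN) over C classes, are presumably:

```latex
\mathrm{Dice} = \frac{2\,TP}{2\,TP + FP + FN},
\qquad
\mathrm{mIoU} = \frac{1}{C}\sum_{i=1}^{C}\frac{TP_i}{TP_i + FP_i + FN_i}
```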
“…The higher the OA value, the more correctly classified samples in the dataset. IoU (Equation (2)) is calculated as the number of samples in the intersection divided by that in the union [64], or the ratio between the number of true positives and the sum of true positives, false positives, and false negatives [65]. The higher the IoU of the ith class (IoU i ), the fewer wrongly predicted or misclassified samples of the ith class.…”
Section: Semantic Segmentation Performance Evaluation
confidence: 99%
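The per-class IoU the quote describes, IoU_i = TP_i / (TP_i + FP_i + FN_i), falls out of a confusion matrix directly: the diagonal gives TP, and the column and row sums give FP and FN. A small NumPy sketch, with an invented 3-class confusion matrix:

```python
import numpy as np

def per_class_iou(cm):
    """IoU_i = TP_i / (TP_i + FP_i + FN_i) from a confusion matrix
    (rows = ground truth, columns = prediction)."""
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp   # predicted as class i but not class i
    fn = cm.sum(axis=1) - tp   # class i but predicted otherwise
    denom = tp + fp + fn
    return np.divide(tp, denom, out=np.zeros(len(cm)), where=denom > 0)

# Hypothetical 3-class confusion matrix
cm = np.array([[50,  2,  3],
               [ 4, 40,  1],
               [ 2,  1, 47]])
iou = per_class_iou(cm)
print(iou, iou.mean())   # per-class IoU and mIoU
```

Overall accuracy (OA) from the same matrix is simply `np.trace(cm) / cm.sum()`, which makes the two metrics easy to compare on one set of predictions.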