2019
DOI: 10.3390/rs11243040

Imbalanced Learning in Land Cover Classification: Improving Minority Classes’ Prediction Accuracy Using the Geometric SMOTE Algorithm

Abstract: The automatic production of land use/land cover maps continues to be a challenging problem, with important impacts on the ability to promote sustainability and good resource management. The ability to build robust automatic classifiers and produce accurate maps can have a significant impact on the way we manage and optimize natural resources. The difficulty in achieving these results comes from many different factors, such as data quality and uncertainty. In this paper, we address the imbalanced learning problem …
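The data-level remedy the abstract refers to slots into a standard training workflow. The sketch below is a minimal, hypothetical illustration using the imbalanced-learn API with plain SMOTE as the resampler and a synthetic dataset standing in for land-cover data; the paper's Geometric SMOTE is distributed separately and, assuming it exposes the same fit_resample interface, would be a drop-in replacement for the oversampling step.

```python
# Minimal oversample-then-classify sketch (imbalanced-learn API, synthetic data).
from collections import Counter

from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an imbalanced land-cover dataset (three classes).
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           weights=[0.85, 0.10, 0.05], random_state=0)
print("class counts:", Counter(y))

clf = Pipeline([
    ("oversample", SMOTE(random_state=0)),  # Geometric SMOTE could replace this step
    ("forest", RandomForestClassifier(random_state=0)),
])
# With an imblearn Pipeline, resampling happens inside each CV training fold only,
# so the test folds keep the original, imbalanced class distribution.
print("F1 (macro):", cross_val_score(clf, X, y, scoring="f1_macro", cv=5).mean())
```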

Citations: cited by 64 publications (52 citation statements)
References: 35 publications
“…This happened when testing against classification thresholding, class weighting, and a baseline experiment in which the class imbalance was not taken into account. These results agree with the literature on the class imbalance problem: oversampling approaches are often the best-performing ones (Johnson et al., 2013; Buda et al., 2018; Douzas et al., 2019), even though in this case a binary classification problem with an imbalance ratio of around 1:100 was considered. Nonetheless, the precision-recall curve indicates a smaller difference between the oversampling and baseline experiments, with the latter even showing a marginal improvement in PR-AUC.…”
Section: Discussion (supporting)
confidence: 90%
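For readers who want to see the kind of comparison this excerpt describes, here is a small, self-contained sketch. The synthetic data and logistic regression model are assumptions of this illustration, not the cited study's setup. It builds a roughly 1:100 binary problem and scores a baseline model, a class-weighted model, and a model trained on oversampled data by average precision, i.e. the PR-AUC the excerpt mentions.

```python
from imblearn.over_sampling import RandomOverSampler
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Binary problem with ~1:100 imbalance, as in the excerpt.
X, y = make_classification(n_samples=20000, weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def pr_auc(model, X_fit, y_fit):
    # Fit on the (possibly rebalanced) training set, score on the untouched test set.
    scores = model.fit(X_fit, y_fit).predict_proba(X_te)[:, 1]
    return average_precision_score(y_te, scores)

X_os, y_os = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)
print("baseline      :", pr_auc(LogisticRegression(max_iter=1000), X_tr, y_tr))
print("class weights :", pr_auc(LogisticRegression(max_iter=1000, class_weight="balanced"), X_tr, y_tr))
print("oversampling  :", pr_auc(LogisticRegression(max_iter=1000), X_os, y_os))
```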
“…Data-level methods focus on the training data and on the class distribution to reduce or eliminate class imbalance. Two main approaches can be distinguished: oversampling (Johnson et al., 2013; Douzas et al., 2019) and undersampling (Leichtle et al., 2017; Buda et al., 2018). These methods either generate new image samples (oversampling) or modify the subset sample selection (undersampling).…”
Section: Introduction (mentioning)
confidence: 99%
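The two data-level strategies contrasted in this excerpt are one-liners in the imbalanced-learn API; the sketch below uses a hypothetical synthetic dataset to show how each one rebalances the class counts.

```python
from collections import Counter

from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification

# Synthetic 9:1 binary dataset as a stand-in.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("original     :", Counter(y))

# Oversampling: replicate (or synthesize) minority samples up to parity.
X_o, y_o = RandomOverSampler(random_state=0).fit_resample(X, y)
print("oversampled  :", Counter(y_o))

# Undersampling: discard majority samples down to parity.
X_u, y_u = RandomUnderSampler(random_state=0).fit_resample(X, y)
print("undersampled :", Counter(y_u))
```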
“…To statistically compare model performance with multiple datasets, Demšar [58] and Garcia and Herrera [59] suggested the Wilcoxon paired rank test and the Friedman test, which are nonparametric. The Wilcoxon paired signed-rank test was used only when two models were compared [60][61][62]. The Friedman test was used for the multiple model comparisons [63][64][65] since multiple …”
[Figure caption spilled into the excerpt: "The first and second rows show the rate of occurrence of line and polygon graphs as density using the reference data."]
Section: Lake Tapps (mentioning)
confidence: 99%
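Both tests named in this excerpt are available in SciPy. The scores below are made-up placeholders; the point is only which test pairs with which comparison (two models: Wilcoxon; three or more: Friedman).

```python
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical accuracies of three models on the same eight datasets.
model_a = [0.81, 0.74, 0.90, 0.68, 0.77, 0.85, 0.79, 0.88]
model_b = [0.79, 0.72, 0.91, 0.65, 0.75, 0.83, 0.78, 0.86]
model_c = [0.76, 0.70, 0.87, 0.66, 0.73, 0.80, 0.74, 0.84]

# Two models: Wilcoxon signed-rank test on the paired per-dataset scores.
stat, p = wilcoxon(model_a, model_b)
print(f"Wilcoxon: statistic={stat:.2f}, p={p:.3f}")

# Three or more models: Friedman test on the matched groups.
stat, p = friedmanchisquare(model_a, model_b, model_c)
print(f"Friedman: statistic={stat:.2f}, p={p:.3f}")
```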
“…Random oversampling simply copies samples of the minority class, which easily leads to overfitting [44] and does little to improve the classification accuracy of the minority class. The synthetic minority oversampling technique (SMOTE) is a powerful algorithm proposed by Chawla [29] that has shown a great deal of success in various applications [45][46][47]. SMOTE will be described in detail in Section 2.1.…”
(mentioning)
confidence: 99%
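The step that distinguishes SMOTE from the plain copying criticized in this excerpt is linear interpolation between a minority sample and one of its k nearest minority-class neighbors: x_new = x_i + lambda * (x_nn - x_i), with lambda drawn uniformly from [0, 1). The following is a didactic NumPy sketch of that core idea, not the reference implementation.

```python
import numpy as np

def smote_sample(X_min, k=5, n_new=100, seed=None):
    """Generate n_new synthetic points from minority samples X_min (n, d), n > k."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # Pairwise distances among minority samples; exclude self-matches.
    dist = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    neighbors = np.argsort(dist, axis=1)[:, :k]  # k nearest minority neighbors

    new_points = []
    for _ in range(n_new):
        i = rng.integers(n)                      # pick a minority sample
        j = neighbors[i, rng.integers(k)]        # pick one of its k neighbors
        lam = rng.random()                       # interpolation factor in [0, 1)
        new_points.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(new_points)

# Usage: 20 minority points in 2-D, 50 synthetic points along neighbor segments.
X_min = np.random.default_rng(0).normal(size=(20, 2))
print(smote_sample(X_min, k=5, n_new=50, seed=0).shape)  # (50, 2)
```

Because every synthetic point lies on a segment between two existing minority samples, the method densifies the minority region instead of duplicating points, which is why it overfits less than random oversampling; Geometric SMOTE generalizes where around each sample the synthetic points may fall.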