2017
DOI: 10.1080/2150704x.2016.1274443

Urban landcover classification from multispectral image data using optimized AdaBoosted random forests

Cited by 23 publications (10 citation statements)
References 12 publications
“…For the same classification problem, different classifiers produce different results. Several researchers have reported that even when no remarkable difference exists among classifiers at the overall level, dissimilarities can still be found at the per-class level [16]-[19], [21]-[22]. This situation makes such classifiers suitable as the base classifiers of a DLS.…”
Section: A Diversity of Base Classifiers in DLS
confidence: 93%
“…Overall, the random forest model's classification power is strongly influenced by the number of subtrees used, owing to the diversity they provide; this parameter should therefore be chosen carefully. Following previous works [63][64][65], a total of 500 subtrees were used in this project.…”
Section: Landsat Image Classification and Accuracy Assessment
confidence: 99%
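The sensitivity to the number of subtrees can be illustrated with a toy majority-vote calculation. This is a sketch, not the cited experiments: it assumes independent trees, each correct with a fixed probability, whereas real forest trees are correlated, so the numbers only show the qualitative trend (accuracy rises and then plateaus as trees are added).

```python
from math import comb

def majority_vote_accuracy(n_trees: int, p_correct: float) -> float:
    """Exact probability that a strict-majority vote of n_trees independent
    classifiers, each correct with probability p_correct, is correct.
    (Toy model only; use an odd n_trees so there are no ties.)"""
    needed = n_trees // 2 + 1  # smallest count forming a strict majority
    return sum(
        comb(n_trees, k) * p_correct**k * (1 - p_correct) ** (n_trees - k)
        for k in range(needed, n_trees + 1)
    )

if __name__ == "__main__":
    # Ensemble accuracy grows with the number of trees, then saturates.
    for n in (1, 11, 101, 501):
        print(n, round(majority_vote_accuracy(n, 0.6), 4))
```

Under this model, diminishing returns set in quickly, which is consistent with a fixed choice such as 500 subtrees being "large enough" in practice.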
“…The key to the algorithm is to determine the number of variables considered at each split and the number of decision trees. The algorithm does not overfit as the number of trees increases, has good generalization performance, is robust, and is suitable for dealing with high-dimensional, nonlinear, complex problems [25,26].…”
Section: Random Forest (RF) Algorithm
confidence: 99%
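The "number of variables" parameter refers to the random subset of features a forest considers at each split. A minimal stdlib sketch of that mechanism (not a full tree implementation, and not the cited authors' code):

```python
import random

def candidate_features(n_features: int, m: int, rng: random.Random) -> list:
    """At each split, a random forest evaluates only m of the n_features
    variables; this per-split subsampling is what decorrelates the trees."""
    return rng.sample(range(n_features), m)

if __name__ == "__main__":
    rng = random.Random(42)
    # Three consecutive splits draw different candidate subsets of size m=3.
    for _ in range(3):
        print(sorted(candidate_features(10, 3, rng)))
```

Because each split sees a fresh random subset, individual trees differ from one another, which is why adding more of them averages out variance rather than overfitting.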
“…However, it is not the case that the larger these two parameters are, the higher the accuracy is. After several experiments, the model accuracy was higher and stable when k ∈ [220, 290] and m ∈ [18, 26], judging by the robustness of the results. The final test found that the effect was optimal when k = 260 and m = 22. A comparison of the effectiveness of the six machine learning algorithms is shown in Figure 6.…”
Section: Machine Learning Models
confidence: 99%
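The tuning procedure described above (searching over k trees and m variables, then picking the best pair) can be sketched as a plain grid search. The `cv_score` function here is a synthetic stand-in that happens to peak at k=260, m=22 so the loop has something to find; in a real run it would be a cross-validated accuracy computed from the actual model.

```python
# Grid search over the two random-forest parameters discussed above:
# k = number of trees, m = number of variables per split.

def cv_score(k: int, m: int) -> float:
    """Synthetic stand-in for a cross-validated accuracy (hypothetical)."""
    return 1.0 - ((k - 260) / 300) ** 2 - ((m - 22) / 30) ** 2

def grid_search(ks, ms):
    """Return (score, k, m) for the best-scoring parameter pair."""
    return max((cv_score(k, m), k, m) for k in ks for m in ms)

if __name__ == "__main__":
    score, k, m = grid_search(range(220, 291, 10), range(18, 27, 2))
    print(k, m, round(score, 4))  # best pair over the searched grid
```

Exhaustive search is affordable here because the grid is small; for larger grids one would typically switch to randomized or Bayesian search.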