2012
DOI: 10.1016/j.isprsjprs.2011.11.002
An assessment of the effectiveness of a random forest classifier for land-cover classification

Cited by 2,166 publications (1,283 citation statements)
References 64 publications
“…Our results clearly show this actually introduces bias and we advocate for future users of random forest to carefully select training data that are representative and proportional to the composition of actual landscape. Second, following other researchers who found that random forest outperforms other methods for prediction and classification (e.g., Cushman et al 2010;Evans et al 2011;Rodriguez-Galiano et al 2012;Schneider 2012), we expected that random forest would outperform logistic regression and the naïve model based on distance to forest edge. Consistent with this hypothesis, random forest was the highest performing method in all three nations comprising Borneo.…”
Section: Discussion
confidence: 76%
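The first excerpt's caution about choosing training data "representative and proportional to the composition of actual landscape" amounts to stratified sampling in proportion to class area. A minimal sketch of that idea follows; the `proportional_sample` helper and the three land-cover classes are hypothetical illustrations, not code from any cited study.

```python
import numpy as np

def proportional_sample(labels, n_samples, rng):
    """Draw a training sample whose class mix matches the class
    proportions of the full labelled pool (hypothetical helper)."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    chosen = []
    for cls, count in zip(classes, counts):
        # Allocate seats to each class in proportion to its share
        k = int(round(n_samples * count / labels.size))
        idx = np.flatnonzero(labels == cls)
        chosen.extend(rng.choice(idx, size=min(k, idx.size), replace=False))
    return np.array(chosen)

rng = np.random.default_rng(0)
# Synthetic "landscape": 70% forest (0), 20% agriculture (1), 10% urban (2)
landscape = np.repeat([0, 1, 2], [700, 200, 100])
sample = proportional_sample(landscape, 100, rng)
```

A sample drawn this way preserves the 70/20/10 composition, whereas an equal-sized draw per class would over-represent the rare urban class and could introduce the bias the authors warn about.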
“…The principle of ensemble classifiers is that a large collection of weaker classifiers (individual CTs in this case) can be used to create a strong classifier. RF involves the construction of large number of individual trees from the training data (Rodriguez‐Galiano, Ghimire, Rogan, Chica‐Olmo, & Rigol‐Sanchez, 2012). How the trees are constructed differs from CT in that a random selection of training data is used for each tree so that each tree is trained on a different set of data.…”
Section: Methods
confidence: 99%
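The bagging principle described above — many weak classification trees, each fit to a different bootstrap sample of the training data — can be sketched with an off-the-shelf implementation. This is a minimal illustration on synthetic two-band "pixels", assuming scikit-learn is available; the data and thresholds are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic two-band pixels; class depends on the sum of the bands
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# bootstrap=True: each of the 100 trees is trained on a different
# random resample of X_tr, which is the RF construction the quote describes
rf = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=0)
rf.fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)
```

Aggregating the trees' votes yields a classifier considerably stronger than any single tree, which is the "large collection of weaker classifiers" argument in the excerpt.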
“…RF was selected for this study because it generally outperforms conventional classifiers such as the Gaussian maximum likelihood classifier [61,62], while performing favorably, or equally well, to other non-parametric approaches; e.g., CART [63,64], Support Vector Machines [32,65,66], Artificial Neural Networks [67], and K-Nearest Neighbor [68]. It is a powerful non-linear and non-parametric classifier that allows for fusion and aggregation of high-dimensional data from various sources (e.g., optical, SAR, and topography [30,69,70]; SAR and topography [21,58,71]; and optical and topography [72][73][74]).…”
Section: Image Classification
confidence: 99%
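The fusion of high-dimensional, multi-source data mentioned in the last excerpt (optical, SAR, topography) typically reduces to stacking heterogeneous features as extra columns before training. A hedged sketch, assuming scikit-learn and entirely synthetic stand-ins for the three data sources:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
optical = rng.normal(size=(n, 4))   # stand-in for four spectral bands
sar = rng.normal(size=(n, 2))       # stand-in for two backscatter channels
topo = rng.normal(size=(n, 1))      # stand-in for elevation

# "Fusion" here is simply concatenating sources into one feature matrix;
# RF needs no rescaling or distributional assumptions across sources
X = np.column_stack([optical, sar, topo])
y = (optical[:, 0] + 0.5 * sar[:, 0] + topo[:, 0] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
importances = rf.feature_importances_  # one weight per stacked feature
```

Because RF is non-parametric, the optical, SAR, and topographic columns can live on completely different scales, and `feature_importances_` then indicates how much each source contributes to the classification.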