2014
DOI: 10.1061/(asce)cp.1943-5487.0000276
Ensemble Methods for Binary Classifications of Airborne LIDAR Data

Cited by 12 publications (7 citation statements) · References 38 publications
“…The study of Niemeyer et al. (2011) reported that it took 202 min and 252 min to train the CRF and MRF, respectively, on a study area of 1.93 ha. Some emerging research attempted to use ensemble learning (or multiple classifier systems) such as AdaBoost (Lodha et al., 2007; Nourzad & Pradhan, 2014), Cascade Binary Classifier (Carlberg, Gao, Chen, & Zakhor, 2009), Weighted Majority Voting (Samadzadegan et al., 2010), and Random Forests (Chehata et al., 2009; Guo et al., 2011) for both discrete-return and waveform LiDAR data, which still produced desirable classification accuracy (80% to 90%).…”
Section: Classification Techniques
confidence: 98%
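The boosting variant cited above (AdaBoost over weak learners) can be sketched in a few lines of plain Python. This is a minimal illustration, not the cited papers' implementation: the decision-stump weak learner, the single height-like feature, and the candidate threshold grid are all illustrative assumptions.

```python
import math

def stump(x, threshold, polarity):
    # Weak learner: a one-feature decision stump returning +1 or -1.
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, thresholds, rounds=5):
    """Minimal AdaBoost sketch for binary labels in {-1, +1}."""
    n = len(xs)
    w = [1.0 / n] * n                      # uniform sample weights
    ensemble = []                          # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        err, t, pol = min(
            (sum(wi for xi, yi, wi in zip(xs, ys, w)
                 if stump(xi, th, p) != yi), th, p)
            for th in thresholds for p in (+1, -1))
        err = max(err, 1e-10)              # guard log(0) / division by zero
        if err >= 0.5:                     # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Boost the weights of misclassified samples, then renormalise.
        w = [wi * math.exp(-alpha * yi * stump(xi, t, pol))
             for xi, yi, wi in zip(xs, ys, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump(x, t, pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1
```

For example, with a point-height feature, low returns could be labelled ground (−1) and high returns non-ground (+1); the ensemble's weighted sum of stump votes decides the class.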
“…Ensemble learning, also known as multiple classifier systems, has become an influential solution not only for multi-class imbalance learning but also for two-class imbalance problems and standard classification, as discussed in [48] with regard to the boosting algorithms primarily designed for binary classification. In the literature, researchers agree on the versatility and effectiveness of ensemble-based learning techniques, in which the predictions of several component classifiers are combined into a final prediction, improving on the performance of the individual weak learners even with a small training dataset.…”
Section: Ensemble-based Methods
confidence: 99%
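The combination step described above — merging several component classifiers' predictions into one final prediction — reduces, in the weighted-voting case, to a signed weighted sum. A minimal sketch (the accuracy-derived weights are an illustrative convention, not necessarily the scheme used in the cited works):

```python
def weighted_majority_vote(predictions, weights):
    """Combine binary predictions in {-1, +1} from several component
    classifiers into one final prediction, weighting each vote."""
    score = sum(w * p for w, p in zip(weights, predictions))
    return 1 if score >= 0 else -1

# Example: three classifiers weighted by hypothetical validation accuracies.
# Two weaker classifiers vote +1, one stronger classifier votes -1.
print(weighted_majority_vote([+1, +1, -1], [0.55, 0.60, 0.90]))  # → 1
```

Here the two agreeing votes (0.55 + 0.60) outweigh the single strong dissent (0.90), so the ensemble outputs +1.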
“…On top of that, machine learning approaches also apply clustering to define planar segments and to derive further features on these segments (Lu et al., 2009). The most common techniques are conditional random fields (Lu et al., 2009; Niemeyer et al., 2012), support vector machines (Mewhort, 2013), and random forests (Ni et al., 2017), but also AdaBoost and bagging (Nourzad and Pradhan, 2014).…”
Section: Machine Learning Approaches
confidence: 99%
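The contrast between AdaBoost and bagging noted above comes down to how the training set is resampled: bagging fits each component model on an independent bootstrap replicate and combines them by an unweighted vote, rather than reweighting samples between rounds. A minimal sketch with one-feature threshold stumps (the stump learner and the toy data are illustrative assumptions, not the cited paper's setup):

```python
import random

def fit_stump(sample, thresholds):
    # Fit a one-feature stump: choose the threshold with the fewest
    # misclassifications on this bootstrap sample (predict +1 if x >= t).
    return min(thresholds,
               key=lambda t: sum((1 if x >= t else -1) != y for x, y in sample))

def train_bagging(data, thresholds, n_models=25, seed=0):
    """Bagging sketch: fit one stump per bootstrap replicate of the data."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        replicate = [rng.choice(data) for _ in data]  # sample with replacement
        models.append(fit_stump(replicate, thresholds))
    return models

def bagging_predict(models, x):
    # Unweighted majority vote over the bagged stumps.
    votes = sum(1 if x >= t else -1 for t in models)
    return 1 if votes >= 0 else -1
```

Because each replicate is drawn independently, the component models differ slightly, and the averaged vote is less sensitive to noisy samples than any single stump.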