2015
DOI: 10.1109/jstars.2015.2407493
Dynamic Ensemble Selection Approach for Hyperspectral Image Classification With Joint Spectral and Spatial Information

Abstract: Accurate generation of a land cover map from hyperspectral data is an important application of remote sensing. A multiple classifier system (MCS) is an effective tool for hyperspectral image classification. However, most MCS research has addressed the problem of classifier combination, while the potential of selecting classifiers dynamically remains largely unexplored for hyperspectral image classification. The goal of this paper is to assess the potential of dynamic classifier selection/dynamic ensemble selection…

Cited by 40 publications (21 citation statements) · References 46 publications
“…where s ∈ ℝ₀⁺, and n(c) and n(c, f_i) retain the definitions given in Section II. In practice, we initialize s at 0 and increase it by a fixed step (we use 0.1) in each iteration until s no longer satisfies (9). We use this perturbation threshold as an indicator to guide the selection strategies for R-DCS in the following section.…”
Section: A. Computation of Perturbation Thresholds for NCCs
confidence: 99%
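The incremental search this excerpt describes can be sketched as follows. Since inequality (9) is not reproduced in this excerpt, `satisfies_ineq` below is a hypothetical stand-in predicate, not the cited paper's actual condition:

```python
def perturbation_threshold(satisfies_ineq, step=0.1, max_s=100.0):
    """Return the smallest s (in `step` increments, starting at 0) at which
    the inequality stops holding. `satisfies_ineq` stands in for (9)."""
    s = 0.0
    while satisfies_ineq(s) and s <= max_s:  # max_s guards against non-termination
        s += step
    return s

# Toy predicate: with the condition s < 0.35, the search stops at s ≈ 0.4.
print(perturbation_threshold(lambda s: s < 0.35))
```

The step size of 0.1 matches the value reported in the excerpt; a smaller step trades runtime for a tighter estimate of the threshold.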
“…Usually, this classifier choice is based on a local region of the feature space in which the query sample is located. Most works define this local region with the K-Nearest Neighbors technique, which groups samples with similar features to construct a local region [8], [9]. In this work, we group samples differently, by incorporating the concept of robustness to the model specification.…”
Section: Introduction
confidence: 99%
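A common realization of the KNN-defined local region is dynamic classifier selection by local accuracy (DCS-LA). The sketch below is illustrative rather than the cited work's exact rule; classifiers are assumed to be callables mapping a feature matrix to predicted labels:

```python
import numpy as np

def dcs_local_accuracy(X_train, y_train, classifiers, x_query, k=5):
    """Select the classifier with the highest accuracy on the k nearest
    training samples to the query (the KNN-defined local region)."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    region = np.argsort(dists)[:k]           # indices of the local region
    local_acc = [np.mean(clf(X_train[region]) == y_train[region])
                 for clf in classifiers]
    return int(np.argmax(local_acc))         # index of the selected classifier

# Toy example: two degenerate classifiers, each correct in one cluster.
X = np.array([[0.0], [1.0], [2.0], [9.0], [10.0]])
y = np.array([0, 0, 0, 1, 1])
clfs = [lambda X: np.zeros(len(X), int),     # always predicts class 0
        lambda X: np.ones(len(X), int)]      # always predicts class 1
print(dcs_local_accuracy(X, y, clfs, np.array([0.5]), k=3))   # selects 0
print(dcs_local_accuracy(X, y, clfs, np.array([9.5]), k=2))   # selects 1
```

Because the region is recomputed per query, different classifiers can be chosen in different parts of the feature space, which is the core idea of dynamic selection.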
“…Such a technique has been applied to remote sensing datasets [18]. However, most remote sensing applications use a CRF to enforce smoothness over adjacent regions and thereby increase classification accuracy (known as the Potts model) [19,20]. This is mainly due to the extremely costly and time-consuming training of a complex model and the learning of its parameters, which often requires manual annotation of full scenes.…”
Section: Context Features
confidence: 99%
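The Potts pairwise term mentioned here simply charges a constant penalty β for every pair of adjacent pixels with differing labels. A minimal energy computation on a 4-connected grid (illustrative values, not the cited papers' exact formulation) might look like:

```python
import numpy as np

def potts_energy(labels, unary, beta=1.0):
    """Energy of a label map: per-pixel unary costs plus a Potts pairwise
    term that charges `beta` for each 4-connected pair of unequal labels."""
    h, w = labels.shape
    energy = unary[np.arange(h)[:, None], np.arange(w), labels].sum()
    energy += beta * np.count_nonzero(labels[:, 1:] != labels[:, :-1])  # horizontal pairs
    energy += beta * np.count_nonzero(labels[1:, :] != labels[:-1, :])  # vertical pairs
    return float(energy)

# A 2x2 map with one deviating pixel and zero unary cost: two unequal pairs.
labels = np.array([[0, 1], [0, 0]])
print(potts_energy(labels, np.zeros((2, 2, 2))))  # 2.0
```

CRF inference then seeks the labeling that minimizes this energy, which is what produces the smoothing over adjacent regions described in the excerpt.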
“…Some features contain less discriminatory information than others and are therefore not useful for producing the desired learning result, and the limited number of observations may lead the learning algorithm to overfit the noise. Therefore, to achieve excellent classification performance, a dimensionality reduction (DR) [33][34][35][36] procedure is applied before training the classifier, reducing computational complexity and improving classification accuracy. Common dimensionality reduction methods fall into two categories: feature selection and feature extraction.…”
Section: Introduction
confidence: 99%
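Of the two DR families named in this excerpt, feature extraction can be illustrated with PCA, one standard choice (the specific methods of [33]–[36] are not given here):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Feature extraction by PCA: center the data and project it onto the
    top principal components (directions of greatest variance)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # rows of vt = components
    return Xc @ vt[:n_components].T

# Collinear 2-D points compress to one component with no loss of variance.
X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
Z = pca_reduce(X, 1)
print(Z.shape)  # (3, 1)
```

Feature selection, by contrast, would keep a subset of the original spectral bands unchanged, which preserves their physical interpretability at the cost of ignoring correlations between bands.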