2014
DOI: 10.1109/tcyb.2013.2266336
Projection-Based Ensemble Learning for Ordinal Regression

Abstract: The classification of patterns into naturally ordered labels is referred to as ordinal regression. This paper proposes an ensemble methodology specifically adapted to this type of problem, based on computing different classification tasks through the formulation of different order hypotheses. Every single model is trained to distinguish between one given class (k) and all the remaining ones, grouping the latter into those classes with a rank lower than k and those with a rank higher th…


Cited by 40 publications (23 citation statements)
References 30 publications (54 reference statements)
“…Finally, another possibility [67] is to derive a classifier for each class but separating the labels into groups of three classes (instead of only two) for intermediate subtasks (labels lower than C q , label C q , and labels higher than C q ), or two classes for the extreme ones. The objective is to incorporate the order information in the subclassification tasks.…”
Section: Multiple Model Approaches
confidence: 99%
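The decomposition described in the statement above can be sketched in a few lines. This is a minimal illustration, not code from the cited papers: the function name and interface are assumptions, and labels are taken to be integer ranks starting at 0.

```python
# Hedged sketch of the ternary label decomposition: for the subtask
# associated with class q, every pattern is regrouped as "lower than q",
# "equal to q", or "higher than q". The function name is illustrative.
def decompose_labels(y, q):
    """Relabel ordinal targets for the subtask centred on class q.

    For an intermediate class q this yields a ternary task:
      0 -> rank lower than q, 1 -> rank equal to q, 2 -> rank higher than q.
    For the extreme classes only two of these groups can occur, so the
    task degenerates naturally into a binary one.
    """
    return [0 if label < q else (1 if label == q else 2) for label in y]


print(decompose_labels([0, 1, 2, 3], 2))  # intermediate class: ternary task
print(decompose_labels([0, 1, 2, 3], 0))  # extreme class: binary task
```

Because the relabelled groups keep the rank ordering (lower / equal / higher), each subtask still carries the order information that a plain one-vs-all split would discard.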
“…Two main conclusions can be drawn from the study: on the one hand, that the exploitation of the underlying latent manifold via shortest paths is useful for performing the over-sampling process and, on the other hand, that a cost-sensitive approach may in general improve the base performance, but still without reaching the results of over-sampling methods. As future work, the ensemble proposed in [13], which has been shown to perform well for imbalanced metrics, could be tested in conjunction with the ordinal imbalanced methods developed in this paper. Furthermore, techniques designed to extract the underlying manifold without the construction of a neighbourhood graph could be explored, because a notion of distance is assumed for the neighbourhood graph construction which may not actually hold for the manifold, and the patterns could be over-sampled according to the geodesic distance specified by the constructed manifold.…”
Section: Discussion
confidence: 99%
“…However, to the best of our knowledge, the case of imbalanced ordinal classification problems has not yet been tackled, despite its relevance for real-world applications. Nonetheless, some methods have been shown to work well in general for several ordinal and imbalanced metrics, such as the ensemble approach in [13], where various order hypotheses were formulated and fused (although the method was not specifically designed for the imbalanced setting and no experiments were performed on highly imbalanced datasets).…”
Section: Introduction
confidence: 99%
“…The combination of decomposed labels, weights per pattern and SVM base methodology will be referred to in the experimental section as SVM with ordinal decompositions (SVMOD). A reformulation of the extreme learning machine, called extreme learning machine for ordinal regression (ELMOR) [9], uses the one-of-K coding matrix for the outputs (commonly used with artificial neural networks) and considers whether a pattern belongs to a class greater than a fixed k. Finally, the ensemble learning for ordinal regression with product combiner and SVM (EPSVM) combines binary and ternary classification tasks, trying to distinguish each class from the previous and subsequent ones and making use of a probability fusion function [29].…”
Section: Decomposition Methods
confidence: 99%
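The fusion step mentioned for EPSVM can be illustrated with a deliberately simple rule. This sketch is an assumption for illustration only: it takes, for each class q, the probability that the pattern belongs to class q as estimated by the subtask centred on q, normalises, and picks the argmax; the paper's actual product-based fusion function may differ in form.

```python
# Hedged sketch of fusing per-class subtask probabilities; this simple
# normalise-and-argmax rule stands in for the probability fusion function
# of the cited ensemble, whose exact form may differ.
def fuse_predictions(subtask_probs):
    """subtask_probs[q] is the estimated probability that the pattern
    belongs to class q, produced by the subtask centred on class q.
    Normalise the estimates and return the class with the highest score."""
    total = sum(subtask_probs)
    scores = [p / total for p in subtask_probs]
    return max(range(len(scores)), key=scores.__getitem__)


print(fuse_predictions([0.1, 0.7, 0.2]))  # -> 1
```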