2021
DOI: 10.3390/e23040420
Mixture-Based Probabilistic Graphical Models for the Label Ranking Problem

Abstract: The goal of the Label Ranking (LR) problem is to learn preference models that predict the preferred ranking of class labels for a given unlabeled instance. Different well-known machine learning algorithms have been adapted to deal with the LR problem. In particular, fine-tuned instance-based algorithms (e.g., k-nearest neighbors) and model-based algorithms (e.g., decision trees) have performed remarkably well in tackling the LR problem. Probabilistic Graphical Models (PGMs, e.g., Bayesian networks) have not be…

Cited by 9 publications (3 citation statements) · References 43 publications (72 reference statements)
“…Algorithms belonging to different machine learning paradigms (Zhou et al, 2014) have been proposed to tackle the LR problem: instance‐based learning (Cheng et al, 2009; Cheng et al, 2010), decision/regression trees (Cheng et al, 2009; de Sá et al, 2017; Plaia & Sciandra, 2019), neural networks (Ribeiro et al, 2012), association rules (de Sá et al, 2011), probabilistic graphical models (Rodrigo et al, 2021), and transformation methods (Brinker & Hüllermeier, 2020; Cheng et al, 2013; Hüllermeier et al, 2008). However, current state‐of‐the‐art methods are those based on the ensemble technique and, in particular, ensembles of Label Ranking Trees, which have been proposed for standard ensemble techniques: bagging (Aledo et al, 2017; Suchithra & Pai, 2022), boosting (Dery & Shmueli, 2020) and random forest (de Sá et al, 2017; Zhou & Qiu, 2018).…”
Section: Introduction
confidence: 99%
“…Several methods have been proposed to deal with the LR problem: Adaptation methods. The first group comprises the methods that adapt well‐known machine learning algorithms to cope with (possibly incomplete) rankings, such as decision tree induction [4], instance‐based learning [4], probabilistic graphical models [7,8], association rules [9], and neural networks [10]. Transformation methods.…”
Section: Introduction
confidence: 99%
“…The difference between the two rankings is usually measured by a distance. Among the distance measures available in the literature, the Kendall tau distance (or Kendall distance) [9] is the most widely used in several real-world applications centered on the analysis of ranked data [6], [10]–[12].…”
Section: Introduction
confidence: 99%
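As a brief illustration of the distance mentioned in the last citation statement: the Kendall tau distance between two rankings counts the pairs of labels that the two rankings order differently. A minimal sketch (the function name and the ranking convention — each sequence gives the rank position of every label — are illustrative, not taken from the cited papers):

```python
from itertools import combinations

def kendall_tau_distance(r1, r2):
    """Count discordant pairs between two rankings of the same labels.

    r1 and r2 are sequences where r[i] is the rank position assigned
    to label i (illustrative convention, not from the cited papers).
    """
    n = len(r1)
    assert len(r2) == n, "rankings must cover the same labels"
    distance = 0
    for i, j in combinations(range(n), 2):
        # Discordant: the two rankings disagree on the relative order
        # of labels i and j, so the sign of the rank differences flips.
        if (r1[i] - r1[j]) * (r2[i] - r2[j]) < 0:
            distance += 1
    return distance

print(kendall_tau_distance([1, 2, 3], [1, 2, 3]))  # 0 (identical rankings)
print(kendall_tau_distance([1, 2, 3], [3, 2, 1]))  # 3 (fully reversed: n*(n-1)/2)
```

The distance is 0 for identical rankings and reaches its maximum, n(n−1)/2, when one ranking is the exact reversal of the other, which is why it is a natural loss for ranked data.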