2017
DOI: 10.32614/rj-2017-012

Multilabel Classification with R Package mlr

Abstract: We implemented several multilabel classification algorithms in the machine learning package mlr. The implemented methods are binary relevance, classifier chains, nested stacking, dependent binary relevance and stacking, which can be used with any base learner that is accessible in mlr. Moreover, there is access to the multilabel classification versions of randomForestSRC and rFerns. All these methods can be easily compared by different implemented multilabel performance measures and resampling methods in the s…
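
The abstract outlines a workflow of defining a multilabel task, wrapping a base learner with a problem-transformation method, and evaluating it with multilabel measures and resampling. The R snippet below is a minimal sketch of that workflow with mlr; the synthetic data set, the object names, the choice of rpart as base learner, and the Hamming-loss and F1 measures are illustrative assumptions, not the paper's own example.

library(mlr)

# Minimal sketch: a synthetic two-label task (target columns must be logical).
set.seed(1)
n <- 200
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$label1 <- d$x1 + rnorm(n, sd = 0.5) > 0
d$label2 <- d$x2 - d$x1 + rnorm(n, sd = 0.5) > 0
task <- makeMultilabelTask(id = "toy", data = d, target = c("label1", "label2"))

# Wrap any mlr base learner with a problem-transformation method,
# here binary relevance around a classification tree (rpart).
lrn <- makeMultilabelBinaryRelevanceWrapper(makeLearner("classif.rpart"))

# Cross-validate and evaluate with multilabel performance measures.
res <- resample(lrn, task,
                resampling = makeResampleDesc("CV", iters = 3),
                measures = list(multilabel.hamloss, multilabel.f1))
res$aggr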

Cited by 22 publications (35 citation statements); references 27 publications (47 reference statements).

Citation statements (ordered by relevance):
“…We compare the hybrid label-based meta-learning with generalized linear mixed model (HybridLBGLM) approach with MLKNN [23], the binary relevance method (BR) [15], [35], multi-label ferns (Ferns) [36] and randomForestSRC [37]. MLKNN is arguably the state-of-the-art in instance-based multi-label ranking.…”
Section: A. Evaluation Measurements (mentioning)
confidence: 99%
“…The binary relevance method, which is the simplest problem-transformation method, learns a binary classifier for each label and then combines all binary classifiers into a multi-label prediction. We used the mlr package [35] to run BR, Ferns and randomForestSRC. Tables 3 and 4 summarize the testing results of our experiments.…”
Section: A. Evaluation Measurements (mentioning)
confidence: 99%
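
The quoted passage describes running binary relevance alongside the algorithm-adaptation learners for rFerns and randomForestSRC through mlr. A hedged sketch of such a comparison is shown below; it assumes mlr's bundled yeast.task example task and the learner IDs "multilabel.rFerns" and "multilabel.randomForestSRC", and the holdout split and measures are illustrative choices, not those of the cited study.

library(mlr)

# Sketch of comparing BR, multi-label ferns and randomForestSRC in mlr.
# Assumes the rFerns and randomForestSRC packages are installed and uses
# mlr's bundled multilabel example task yeast.task.
lrns <- list(
  makeMultilabelBinaryRelevanceWrapper(makeLearner("classif.rpart")),  # BR
  makeLearner("multilabel.rFerns"),                                    # multi-label ferns
  makeLearner("multilabel.randomForestSRC")                            # random forest (SRC)
)

bmr <- benchmark(lrns, tasks = yeast.task,
                 resamplings = makeResampleDesc("Holdout", split = 2/3),
                 measures = list(multilabel.hamloss, multilabel.acc))
getBMRAggrPerformances(bmr, as.df = TRUE)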
“…Although it does not contain MLC strategies, it supports the ARFF variation for MLC data, largely used for data mining and machine learning (ML) experiments, and has useful features such as dataset characterization, MLC evaluation measures, and a rich user interface for data exploration. Some works use the mlr package, which was not specifically designed for MLC. As a result, it provides only a few multi-label strategies (Probst et al., 2017) and does not support the MLC ARFF format. In fact, it is a general-purpose package with an interface to more than one hundred algorithms that supports several ML tasks (Bischl et al., 2016).…”
Section: Introduction (mentioning)
confidence: 99%
“…We used the random forest machine learning method (Qi, 2012) to predict parity reached in dairy cattle at birth and at first calving. The data were analyzed in the statistical program R (R Core Team, 2016), using the packages 'mlr' (Probst et al., 2017) to implement 'randomForest'. Random forest was applied by first tuning on the training data, which was the data from 2000 to 2006, and then validating on the 2007 data.…”
Section: Discussion (mentioning)
confidence: 99%
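
The tune-then-validate pattern described in that passage can be sketched in mlr roughly as follows; the built-in iris.task stands in for the dairy data, the random index split stands in for the split by year, and the tuned parameters and search budget are illustrative assumptions.

library(mlr)

# Hedged sketch of the tune-on-training, validate-on-holdout pattern.
# iris.task stands in for the dairy data; the random index split stands in
# for the 2000-2006 vs. 2007 split by year. Requires the randomForest package.
set.seed(1)
train.idx <- sample(150, 100)          # plays the role of the 2000-2006 data
valid.idx <- setdiff(1:150, train.idx) # plays the role of the 2007 data

lrn <- makeLearner("classif.randomForest")
ps  <- makeParamSet(
  makeIntegerParam("mtry",  lower = 1,   upper = 4),
  makeIntegerParam("ntree", lower = 100, upper = 500)
)

# Tune only on the training subset, then refit and validate on the holdout.
tr <- tuneParams(lrn, task = subsetTask(iris.task, train.idx),
                 resampling = makeResampleDesc("CV", iters = 3),
                 par.set = ps, control = makeTuneControlRandom(maxit = 10))
mod  <- train(setHyperPars(lrn, par.vals = tr$x), iris.task, subset = train.idx)
pred <- predict(mod, task = iris.task, subset = valid.idx)
performance(pred, measures = acc)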