2019
DOI: 10.1016/j.eswa.2019.04.051
Advancing Ensemble Learning Performance through data transformation and classifiers fusion in granular computing context

Abstract: Classification is a special type of machine learning task, which is essentially achieved by training a classifier that can be used to classify new instances. In order to train a high-performance classifier, it is crucial to extract representative features from raw data, such as text and images. In reality, instances could be highly diverse even if they belong to the same class, which indicates that different instances of the same class could represent very different characteristics. For example, in a facial expres…

Cited by 27 publications (12 citation statements) · References 48 publications
“…It should consider not only the quality of local algorithm results on each scale but also the consistency of results between different analysis hierarchies. According to the granular computing theory [ 28 , 29 ], the knowledge granularity could clearly measure the amount of information among different analysis hierarchies, which helps evaluate the scale transformation process [ 30 , 31 ].…”
Section: Literature Review (citation type: mentioning)
Confidence: 99%
“…Generally, its behaviour depends mainly upon the features of the various suggested areas. Therefore, the performance evaluation of ensemble and ML models for vote-based assessment is significantly desired, although many assessment tasks have been carried out by researchers, such as [6,7,8]. There are a wide variety of methods to build ensembles.…”
Section: Related Work (citation type: mentioning)
Confidence: 99%
“…Bagging and boosting are classical ensemble learning methods. However, bagging or boosting is based on the same classification algorithm, focusing on the diversity of the data samples, and lacks the diversity created by using different algorithms [8]. In addition, feature extraction based on various classifiers is more significant in NLP [23].…”
Section: A. Classifier Fusion (citation type: mentioning)
Confidence: 99%
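The point made in this statement, that bagging draws its diversity from resampled data rather than from different algorithms, can be illustrated with a minimal bootstrap sketch. The toy dataset, the `train_stump` weak learner, and all names below are hypothetical illustrations, not taken from the paper:

```python
import random

def bootstrap(data, rng):
    """Sample len(data) points with replacement: the 'data diversity' in bagging."""
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    """Hypothetical weak learner: predicts the majority label of its bootstrap sample."""
    labels = [y for _, y in sample]
    return max(set(labels), key=labels.count)

rng = random.Random(0)
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1), (0.7, 1)]

# Every base model uses the same algorithm; only the resampled data differs.
models = [train_stump(bootstrap(data, rng)) for _ in range(5)]
prediction = max(set(models), key=models.count)
print(prediction)
```

Classifier fusion in the sense quoted above would instead combine models produced by different algorithms, which bagging alone does not provide.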
“…The underlying idea of ensemble learning is that even if one weak classifier gets the wrong prediction, other classifiers can correct the error to some extent. The two most common ways of ensemble learning are bagging and boosting [8]. However, these two methods are unsuitable for ensemble learning between different classifiers.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
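The error-correction idea in this last statement, where classifiers built from different algorithms vote so that one wrong prediction can be outvoted, can be sketched as plain majority-vote fusion. The three rule-based classifiers and their thresholds are hypothetical stand-ins for heterogeneous base learners:

```python
from collections import Counter

# Hypothetical heterogeneous classifiers: each maps a 2-feature instance to a label.
def clf_a(x):
    return 1 if x[0] > 0.5 else 0   # rule on the first feature

def clf_b(x):
    return 1 if x[1] > 0.5 else 0   # rule on the second feature

def clf_c(x):
    return 1 if x[0] + x[1] > 1.0 else 0  # rule on the feature sum

def fuse(classifiers, x):
    """Majority-vote fusion: each classifier votes; the most common label wins."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

ensemble = [clf_a, clf_b, clf_c]
print(fuse(ensemble, (0.9, 0.2)))  # clf_b votes 0, but is outvoted by clf_a and clf_c
```

Here clf_b's wrong vote on (0.9, 0.2) is corrected by the other two classifiers, which is exactly the behaviour that bagging or boosting over a single algorithm cannot supply when the base learners must differ.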