2019 4th International Conference on Computer Science and Engineering (UBMK)
DOI: 10.1109/ubmk.2019.8907028
A Weighted Majority Voting Ensemble Approach for Classification

Cited by 73 publications (38 citation statements)
References 13 publications
“…The experiment was conducted using a tool known as RM, a common software package developed for data science and machine learning tasks [33]. This tool is widely used by researchers around the world owing to its range of capabilities and ease of application [65][66][67]. In the previous section, the description and details of the selected dataset were presented.…”
Section: Model Implementation Using RM
confidence: 99%
“…How to accurately allocate large weights to the base classifiers with large contributions is also a key problem. Alican Dogan [17] proposed a new method from the perspective of weight allocation. Different from the existing literature, this method provides only a reward mechanism.…”
Section: Related Work
confidence: 99%
“…each base classifier on the training set, runs the trained classifier on the validation set, and obtains different weights [17], i.e. $w_1, \ldots, w_i, \ldots, w_n$.…”
confidence: 99%
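The scheme quoted above — training each base classifier, scoring it on a validation set, and using the resulting scores as voting weights — can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation; the function names and the choice of plain validation accuracy as the weight $w_i$ are assumptions.

```python
# Sketch of weighted majority voting: each base classifier's vote counts
# in proportion to its accuracy on a held-out validation set.
from collections import Counter

def validation_weights(classifiers, X_val, y_val):
    """Weight each trained classifier by its accuracy on the validation set."""
    weights = []
    for clf in classifiers:
        correct = sum(1 for x, y in zip(X_val, y_val) if clf(x) == y)
        weights.append(correct / len(y_val))
    return weights

def weighted_majority_vote(classifiers, weights, x):
    """Sum the weight behind each predicted label; return the heaviest label."""
    tally = Counter()
    for clf, w in zip(classifiers, weights):
        tally[clf(x)] += w
    return tally.most_common(1)[0][0]
```

Here each classifier is any callable mapping an input to a label, so the sketch works with simple threshold functions as well as trained models exposing a predict method.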
“…There are also some variations of these methods. For instance, Dogan and Birant [6] proposed initializing all the weights to the same value and rewarding the best-performing trees on the validation set formed by the out-of-bag samples. Zhukov et al. [7] added a pruning step that replaces the worst decision tree so the ensemble can handle concept drift.…”
Section: Related Work
confidence: 99%
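The reward mechanism described in the quotes above (equal initial weights, with only the best-performing members rewarded) might be sketched as below. The reward increment, the top-k cutoff, and the use of a plain holdout in place of out-of-bag samples are all illustrative assumptions, not details from the cited papers.

```python
# Sketch of a reward-only weight scheme: start all tree weights equal,
# then add a fixed reward to the trees that score best on a validation
# set (no penalty is applied, matching the "reward mechanism only" idea).
def reward_weights(trees, X_val, y_val, reward=0.1, top_k=2):
    n = len(trees)
    weights = [1.0 / n] * n  # equal initialization
    accs = []
    for tree in trees:
        correct = sum(1 for x, y in zip(X_val, y_val) if tree(x) == y)
        accs.append(correct / len(y_val))
    # reward only the top_k best performers
    best = sorted(range(n), key=lambda i: accs[i], reverse=True)[:top_k]
    for i in best:
        weights[i] += reward
    total = sum(weights)          # renormalize so the weights sum to 1
    return [w / total for w in weights]
```

Because there is no penalty, a poorly performing tree keeps its initial weight rather than being driven to zero; pruning variants such as the one attributed to Zhukov et al. [7] would instead replace that tree outright.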