2019 Innovations in Intelligent Systems and Applications Conference (ASYU)
DOI: 10.1109/asyu48272.2019.8946373
Weighted Voting Based Ensemble Classification with Hyper-parameter Optimization

Cited by 7 publications (4 citation statements) · References 7 publications
“…ML's application extends to improving plant agronomic traits through the integration of large omics data (Isewon et al., 2022). Studies by Farooq et al. (2022), Isewon et al. (2022), and Silva et al. (2019) highlight the superiority of ML methods, particularly decision tree-based ensemble models (Gokalp and Tasci, 2019), in genomic prediction and integrative analysis of plant omics data. ML's potential in deciphering complex interactions in plant molecular biology, including pathogen effector genes and plant immunity, is also underscored (Silva et al., 2019).…”
Section: Validation Strategies
Mentioning confidence: 99%
“…Multiple learning algorithms make the classification model more robust. Such methods increase efficiency but can be biased, since performance depends heavily on the weights used in weighted voting (Gokalp and Tasci, 2019). These can involve a combination of any of the classifiers described above.…”
Section: Traditional Classifiers
Mentioning confidence: 99%
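The statement above describes weighted voting in general terms: each base classifier casts a vote scaled by its weight, and a poorly chosen weight vector biases the ensemble. A minimal, generic sketch of that mechanism (not the cited paper's specific optimized scheme; the labels and weights below are illustrative placeholders) is:

```python
# Minimal sketch of weighted majority voting over classifier predictions.
# Assumption: each classifier carries a non-negative weight, e.g. derived
# from its validation accuracy; the ensemble returns the class label whose
# voters have the largest total weight.
from collections import defaultdict

def weighted_vote(predictions, weights):
    """predictions: list of class labels, one per base classifier;
    weights: matching list of non-negative classifier weights."""
    totals = defaultdict(float)
    for label, w in zip(predictions, weights):
        totals[label] += w  # accumulate weight per candidate label
    # Class with the highest accumulated weight wins the vote.
    return max(totals, key=totals.get)

# Three classifiers vote on one sample: the single heavily weighted
# classifier (0.9) is outvoted by two lighter ones that agree (0.6 + 0.5).
print(weighted_vote(["spam", "ham", "ham"], [0.9, 0.6, 0.5]))  # → ham
```

The example also illustrates the bias the quoted passage warns about: shifting the first weight from 0.9 to 1.2 would flip the ensemble's decision without any classifier changing its prediction.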
“…It has been observed over time that ensemble methods, if not properly checked, might not ensure that the best-performing set of weights is used in the final model. Our proposed method therefore performed a weighted-average ensemble [8], one of several ways of achieving a model ensemble in neural networks alongside voting [9], stacking [10], and snapshot or checkpoint ensembles [11], in a unique way. Here, instead of allowing all models to contribute equally to the final prediction, contributions depended on the level of trust and estimated performance, ensuring that poorly performing models do not degrade the overall forecast result.…”
Section: The Network Architecture
Mentioning confidence: 99%
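The weighted-average idea in the last statement, where each model's output contributes in proportion to a trust weight from its estimated performance, can be sketched generically as follows. This is an assumed, simplified rendering of the general technique, not the citing paper's implementation; the probability vectors and validation scores are invented placeholders.

```python
# Hedged sketch of a weighted-average ensemble over probabilistic outputs:
# each model's predicted class probabilities are scaled by a normalized
# trust weight (here, a validation score), so weak models contribute less
# to the combined forecast.

def weighted_average_ensemble(prob_outputs, val_scores):
    """prob_outputs: list of per-class probability vectors, one per model;
    val_scores: matching validation scores used as trust weights."""
    total = sum(val_scores)
    weights = [s / total for s in val_scores]  # normalize weights to sum to 1
    n_classes = len(prob_outputs[0])
    # Weighted average of each class probability across models.
    return [
        sum(w * probs[c] for w, probs in zip(weights, prob_outputs))
        for c in range(n_classes)
    ]

# Two strong models favor class 0; one weak model favors class 1 but is
# down-weighted by its lower validation score.
probs = [[0.8, 0.2], [0.7, 0.3], [0.2, 0.8]]
scores = [0.95, 0.90, 0.50]
avg = weighted_average_ensemble(probs, scores)
print(avg.index(max(avg)))  # → 0 (the weak model cannot flip the forecast)
```

Because the weights are normalized, the combined output remains a valid probability vector, which is what lets the ensemble cap the influence of poorly performing members rather than merely averaging them in equally.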