2021
DOI: 10.1007/978-981-16-1249-7_4
Evaluating Deep Neural Network Ensembles by Majority Voting Cum Meta-Learning Scheme

Abstract: Deep Neural Networks (DNNs) are prone to overfitting and hence have high variance. Overfitted networks do not perform well on new data instances. So instead of using a single DNN as the classifier, we propose an ensemble of seven independent DNN learners formed by varying only the input to these DNNs while keeping their architecture and intrinsic properties the same. To induce variety in the training input, for each of the seven DNNs, one-seventh of the data is deleted and replenished by bootstrap sampling from the remaining samp…
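The input-variation step the abstract describes (delete one-seventh of the training data per learner, replenish by bootstrap sampling from the rest) can be sketched as follows. This is a reading of the abstract rather than the authors' released code, and all names are illustrative:

```python
import numpy as np

def make_ensemble_inputs(X, y, n_learners=7, seed=0):
    """For each of the n_learners DNNs, delete a distinct 1/n_learners
    slice of the training set and replenish it by bootstrap sampling
    (with replacement) from the remaining samples, so every learner
    sees a dataset of the original size but with different contents."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_learners)
    datasets = []
    for k in range(n_learners):
        kept = np.concatenate([f for j, f in enumerate(folds) if j != k])
        refill = rng.choice(kept, size=len(folds[k]), replace=True)
        idx = np.concatenate([kept, refill])
        datasets.append((X[idx], y[idx]))
    return datasets
```

Each learner's dataset keeps the original size, so any accuracy differences among the seven DNNs come from the resampled contents rather than from differing amounts of training data.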

Cited by 9 publications (3 citation statements)
References 17 publications (15 reference statements)
“…Due to the lack of similar CSE recognition systems, we utilized a baseline system based on the majority voting [91] method, in which the predictions of all the classifiers are combined and the class label with the highest frequency is selected as the final prediction. By comparing the results of the two models, we aim to show the benefits of using a more sophisticated late fusion model over a simple majority voting ensemble model for CSE attack recognition.…”
Section: CSE-ARS Evaluation
Mentioning confidence: 99%
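A minimal sketch of the hard majority voting this baseline describes, assuming integer class labels and one equal vote per classifier (all names are hypothetical):

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_classifiers, n_samples) array of integer labels.
    Returns the label predicted most often for each sample; ties break
    toward the smaller label index, since argmax takes the first maximum."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    # counts[c, i] = number of classifiers voting class c for sample i
    counts = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, preds)
    return counts.argmax(axis=0)

# Three classifiers voting on four samples
print(majority_vote([[0, 1, 2, 1],
                     [0, 1, 1, 1],
                     [2, 1, 2, 0]]))   # -> [0 1 2 1]
```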
“…In traditional ensemble learning, sometimes an objective function needs to be approximated by combining multiple models. Generally, the results of each model are combined by voting (majority wins), weighted voting (some classifiers are more authoritative than others), and averaging the results [40].…”
Section: Proposed Stacking Algorithm Based on GCN (SA-GCN)
Mentioning confidence: 99%
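The three combination rules named in the quoted passage (plain voting, weighted voting, averaging) can be sketched in one function. This is an illustrative reading of the passage, not code from [40]:

```python
import numpy as np

def combine(probas, rule="vote", weights=None):
    """Combine per-model class probabilities of shape
    (n_models, n_samples, n_classes) by one of the three rules:
    plain voting, weighted voting, or averaging."""
    probas = np.asarray(probas)
    if rule == "average":
        # averaging: mean predicted probability, then most likely class
        return probas.mean(axis=0).argmax(axis=-1)
    labels = probas.argmax(axis=-1)            # each model's hard label
    if weights is None:
        weights = np.ones(len(probas))         # equal weights = majority wins
    n_samples, n_classes = probas.shape[1], probas.shape[2]
    scores = np.zeros((n_samples, n_classes))
    for model_labels, w in zip(labels, weights):
        # each model adds its weight to the class it voted for
        np.add.at(scores, (np.arange(n_samples), model_labels), w)
    return scores.argmax(axis=-1)
```

Weighted voting reduces to plain majority voting when all weights are equal, which is why the two rules share one code path here.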
“…Previous ensemble learning studies have shown that the combination of multiple networks can learn more effective information from images than a single network. Good results have been obtained with ensemble learning in many areas of research, such as [51][52][53]. One of the feature combinations for emotion recognition in [46] is fc5 VGG13 + fc7 VGG16 + pool ResNet, which has the disadvantage that the network and parameter size are too large and the model is not easy to train.…”
Section: Implementation Details
Mentioning confidence: 99%
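A minimal sketch of the concatenation-based feature fusion this passage criticizes, assuming the backbone features are precomputed; the dimensions and class count below are placeholders, not the actual sizes of the fc5/fc7/pool layers in [46]:

```python
import torch
import torch.nn as nn

class FusedHead(nn.Module):
    """Feature-level fusion: feature vectors from several backbones
    (e.g. two VGG fc layers and a ResNet pooling layer) are concatenated
    and fed to a single classifier head. Because the input width is the
    sum of all backbone feature sizes, the parameter count grows with
    every backbone added, which is the drawback the passage notes."""
    def __init__(self, dims=(4096, 4096, 2048), n_classes=7):
        super().__init__()
        self.classifier = nn.Linear(sum(dims), n_classes)

    def forward(self, feats):
        # feats: list of (batch, dims[i]) tensors from the backbones
        return self.classifier(torch.cat(feats, dim=1))
```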