2018
DOI: 10.1080/02664763.2018.1441383
The relative performance of ensemble methods with deep convolutional neural networks for image classification

Abstract: Artificial neural networks have been successfully applied to a variety of machine learning tasks, including image recognition, semantic segmentation, and machine translation. However, few studies fully investigated ensembles of artificial neural networks. In this work, we investigated multiple widely used ensemble methods, including unweighted averaging, majority voting, the Bayes Optimal Classifier, and the (discrete) Super Learner, for image recognition tasks, with deep neural networks as candidate algorithm…
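As an illustration of one of the methods named in the abstract, the (discrete) Super Learner, the sketch below selects the candidate network with the lowest cross-validated loss. It is a minimal sketch, assuming each candidate's out-of-fold softmax probabilities are already collected as numpy arrays; the function and variable names are hypothetical and the choice of cross-entropy as the loss is an assumption, not taken from the paper.

```python
import numpy as np

def discrete_super_learner(cv_probs, y_true):
    """Pick the single candidate whose cross-validated predictions
    minimize the negative log-likelihood of the true labels.

    cv_probs : list of arrays, each (n_samples, n_classes), holding one
               candidate model's out-of-fold softmax outputs.
    y_true   : integer labels, shape (n_samples,).
    Returns the index of the selected candidate.
    """
    eps = 1e-12  # guard against log(0)
    losses = [
        -np.mean(np.log(p[np.arange(len(y_true)), y_true] + eps))
        for p in cv_probs
    ]
    return int(np.argmin(losses))

# Illustrative usage with random probabilities for 3 candidates, 4 samples, 2 classes.
rng = np.random.default_rng(0)
cv_probs = [rng.dirichlet(np.ones(2), size=4) for _ in range(3)]
best = discrete_super_learner(cv_probs, np.array([0, 1, 1, 0]))
```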

Cited by 285 publications (220 citation statements)
References 47 publications
“…This design strategy of using independently-optimized diffractive networks is in fact similar to ensemble methods [29], [30] that are frequently used in machine learning literature. Figure 6 summarizes the blind testing accuracies achieved by this strategy using either non-differential or differential diffractive neural networks, i.e., D([10,0], [1,5,40k]) or D([10,10], [1,5,40k]).…”
Section: Results (mentioning)
confidence: 98%
“…Well-known ensemble techniques include boosting, bagging and stacking. Stacking combines the outputs of a set of base learners and lets another algorithm, referred to as the meta-learner, make the final predictions [14]. A super learner is another method that calculates the final predictions by finding the optimal weights of the base learners by minimizing a loss function based on the cross-validated output of the learners [14].…”
Section: Ensemble Methods (mentioning)
confidence: 99%
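The passage above describes the super learner's weight-fitting step: finding weights over the base learners that minimize a loss on their cross-validated output. Below is a minimal sketch of that step, assuming the out-of-fold softmax probabilities are stacked into a single array; the array names and the use of scipy's SLSQP solver with a convex-weight constraint are illustrative choices, not taken from the cited paper.

```python
import numpy as np
from scipy.optimize import minimize

def super_learner_weights(cv_probs, y_true):
    """Find convex weights over base learners that minimize the
    cross-validated cross-entropy of the weighted probability ensemble.

    cv_probs : array (n_models, n_samples, n_classes) of out-of-fold
               softmax probabilities.
    y_true   : integer labels, shape (n_samples,).
    """
    n_models = cv_probs.shape[0]
    eps = 1e-12

    def loss(w):
        blended = np.tensordot(w, cv_probs, axes=1)       # (n_samples, n_classes)
        picked = blended[np.arange(len(y_true)), y_true]  # prob. of the true class
        return -np.mean(np.log(picked + eps))

    w0 = np.full(n_models, 1.0 / n_models)  # start from unweighted averaging
    result = minimize(
        loss, w0, method="SLSQP",
        bounds=[(0.0, 1.0)] * n_models,
        constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
    )
    return result.x
```

With all weights fixed to 1/n_models this reduces to unweighted averaging, which is why the fitted weights can only do at least as well as averaging on the cross-validated loss.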
“…The most common ensemble method used for neural networks is average voting that generates posterior labels by calculating the average of the softmax class probabilities or predicted labels for all the base learners [14]. Majority voting is another ensemble approach that counts the predicted labels from all the base learners and reports the label with the maximum number of votes as the final prediction.…”
Section: Ensemble Methods (mentioning)
confidence: 99%
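The averaging and majority-voting rules quoted above amount to a few lines of array manipulation. The sketch below assumes the base learners' softmax outputs are stacked into one numpy array; names are chosen for illustration only.

```python
import numpy as np

def average_voting(probs):
    """Unweighted averaging: mean of the base learners' softmax outputs,
    then argmax over classes.

    probs : array (n_models, n_samples, n_classes).
    Returns predicted labels, shape (n_samples,).
    """
    return probs.mean(axis=0).argmax(axis=1)

def majority_voting(probs):
    """Majority voting: each base learner casts one vote for its argmax
    class; the class with the most votes wins (ties broken by lowest index).
    """
    n_models, n_samples, n_classes = probs.shape
    votes = probs.argmax(axis=2)                       # (n_models, n_samples)
    counts = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, votes
    )                                                  # (n_classes, n_samples)
    return counts.argmax(axis=0)
```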
“…Other ensemble methods like Super Learner (Ju et al, 2017) should be tried as well. Since label sparsity is at the heart of the performance difference between ChatScript and the CNN models, a more direct way to deal with lack of training examples (possibly obviating the need for a hand-crafted system like ChatScript) could be to automatically generate paraphrases to augment available data, potentially with a content author in the loop; we are currently exploring strategies for doing so.…”
Section: Discussion and Future Work (mentioning)
confidence: 99%
“…Previous research has shown that ensembling models improves performance (Sakaguchi et al, 2015b; Ju et al, 2017; He et al, 2017). We train different models with different splits of training and development data, and ensemble them together.…”
Section: Ensemble Methods (mentioning)
confidence: 99%