2018 International Conference on Content-Based Multimedia Indexing (CBMI)
DOI: 10.1109/cbmi.2018.8516453

Coupled Ensembles of Neural Networks

Cited by 10 publications (11 citation statements). References 13 publications.
“…It should be mentioned that an ensemble does not have to increase the computation cost (training a CNN multiple times) [71]. Instead, the spirit of the ensemble can be realized at different levels, for instance, by multicolumn and multibranch architectures, where the training only needs to be carried out once [35], [72], [73]. Even with a single-branch architecture, an ensemble can be performed with a special training strategy that passes a range of local minima during the training process [74].…”
Section: Discussion (mentioning)
confidence: 99%
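The multi-branch realization of the ensemble idea lends itself to a short illustration. Below is a minimal PyTorch-style sketch: several branches live inside one model, their log-probabilities are averaged, and a single training run updates all of them. The tiny branch architecture, the class count, and the log-probability fusion are illustrative assumptions, not the exact setup of the cited works [35], [72], [73].

```python
# Minimal sketch (PyTorch assumed) of a multi-branch ensemble trained once.
# The tiny ConvNet branch is a placeholder, not the cited architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class MultiBranchEnsemble(nn.Module):
    def __init__(self, num_branches: int = 3, num_classes: int = 10):
        super().__init__()
        self.branches = nn.ModuleList(
            Branch(num_classes) for _ in range(num_branches)
        )

    def forward(self, x):
        # Average per-branch log-probabilities; one loss on this output
        # couples and trains all branches jointly.
        logps = torch.stack([F.log_softmax(b(x), dim=1) for b in self.branches])
        return logps.mean(dim=0)

model = MultiBranchEnsemble()
x = torch.randn(8, 3, 32, 32)
loss = F.nll_loss(model(x), torch.randint(0, 10, (8,)))
loss.backward()  # a single backward pass updates every branch
```

Because the branches are fused before the loss, the ensemble behaves as one network at training time, which is the sense in which training "only needs to be carried out once."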
“…It uses three networks to learn robust facial expression features in the presence of noisy annotations. Inspired by [20,21,22], we use three networks with identical architecture but different initialization, trained jointly using a convex combination of a supervision loss and a consistency loss. Different initializations promote different learning paths for the networks, even though they share the same architecture.…”
Section: Overview (mentioning)
confidence: 99%
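As a rough illustration of the joint objective described above, here is a hedged sketch, assuming PyTorch: three identically shaped networks with different random initializations, optimized with a convex combination of a supervised loss and a consistency term that pulls each network toward the ensemble mean. `make_net`, the MSE consistency term, and the weight `alpha` are placeholder assumptions, not the cited method.

```python
# Sketch of joint training with a convex combination of supervision
# and consistency losses; all specifics here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(num_classes: int = 7) -> nn.Module:  # 7 classes is a placeholder
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                         nn.ReLU(), nn.Linear(128, num_classes))

nets = [make_net() for _ in range(3)]  # same architecture, different init
opt = torch.optim.SGD([p for n in nets for p in n.parameters()], lr=0.01)
alpha = 0.7  # convex weight between supervision and consistency

def step(x, y):
    opt.zero_grad()
    logits = [n(x) for n in nets]
    sup = sum(F.cross_entropy(l, y) for l in logits) / len(logits)
    probs = [F.softmax(l, dim=1) for l in logits]
    mean_p = torch.stack(probs).mean(dim=0).detach()
    # Consistency: pull each network's prediction toward the ensemble mean.
    cons = sum(F.mse_loss(p, mean_p) for p in probs) / len(probs)
    loss = alpha * sup + (1 - alpha) * cons
    loss.backward()
    opt.step()
    return loss.item()

step(torch.randn(4, 3, 32, 32), torch.randint(0, 7, (4,)))
```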
“…Ensembles of neural networks are widely successful for improving both the accuracy and predictive uncertainty of very different predictive tasks, ranging from supervised learning [14] to few-shot learning [11]. Recent studies empirically show that deep ensembles can outperform single models with equivalent computational budgets on image classification tasks [10], [9], [13], [40]. We have in common with these works the outcome, i.e.…”
Section: Related Work (mentioning)
confidence: 99%
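For reference, deep-ensemble inference of the kind these studies evaluate reduces to averaging member predictions in probability space. A minimal sketch, assuming PyTorch and already-trained member models; the stand-in `Linear` members are placeholders.

```python
# Averaging class probabilities over independently trained members.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(models, x):
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0)  # averaged class probabilities

models = [torch.nn.Linear(10, 3) for _ in range(5)]  # stand-ins for trained members
preds = ensemble_predict(models, torch.randn(2, 10))
```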
“…Indeed, [13] suggested the possibility of obtaining better performance with ensembles but without performing an extensive empirical evaluation with state-of-the-art architectures. [10] mainly focused their attention on the possible strategies to combine the predictions of the ensemble members. Differently from [9], we compare neural ensembles with single networks not only of the same depth but also of the same width and greater depth.…”
Section: Related Work (mentioning)
confidence: 99%