1997
DOI: 10.1088/0954-898x/8/3/004
Optimal ensemble averaging of neural networks

Cited by 69 publications (18 citation statements)
References 8 publications
“…A more targeted approach to aggregation might yield superior results, and a different aggregation approach might be best served with a different training methodology. For example, over-trained networks have been shown to perform better in large ensembles than under-trained networks (Naftaly et al, 1997;Granitto et al, 2005).…”
Section: Discussion
confidence: 99%
“…The proposed model is trained using three different ANN networks: a cascade-forward back-propagation network (NEWCF), a feed-forward input time-delay back-propagation network (NEWFFTD), and a fitting network (NEWFIT); each network is trained using ensemble methods under different combinations of groups, applying two methods, averaging and voting [8].…”
Section: Methods
confidence: 99%
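The averaging and voting aggregation rules mentioned in that statement can be sketched as follows. This is a minimal illustration, not the cited implementation: the network outputs are stand-in random scores, and the shapes and class count are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in outputs for 3 networks, 5 samples, 4 classes (softmax-like scores);
# in the cited work these would come from the NEWCF, NEWFFTD, and NEWFIT nets.
scores = rng.random((3, 5, 4))
scores /= scores.sum(axis=-1, keepdims=True)

# Averaging: mean the member scores per sample, then take the argmax class.
avg_pred = scores.mean(axis=0).argmax(axis=-1)

# Voting: each network votes with its own argmax; the majority class wins.
votes = scores.argmax(axis=-1)                 # shape (networks, samples)
vote_pred = np.array(
    [np.bincount(v, minlength=4).argmax() for v in votes.T]
)

print(avg_pred, vote_pred)
```

Averaging uses the full score vectors and so retains confidence information; voting discards it and keeps only each member's hard decision.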
“…So we should not expect a reduction in the bias term compared to single models. According to Naftaly et al [27], the variance term of the ensemble could be decomposed in the following way:…”
Section: Theory and Methods
confidence: 99%
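The quoted statement truncates before the decomposition itself; the exact form is in the cited paper [27]. As a hedged numerical check of the generic identity behind it, the variance of an average of N predictors satisfies Var(mean) = avg_var / N + (1 − 1/N) · avg_cov, where avg_var is the mean individual variance and avg_cov the mean pairwise covariance, so correlated member errors bound the gain from averaging. The data below are synthetic, chosen only to exercise the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 5, 200_000                      # N predictors, T samples each

# Correlated predictor outputs: a shared component plus individual noise.
shared = rng.normal(size=T)
preds = shared + rng.normal(size=(N, T))

cov = np.cov(preds)                    # N x N sample covariance matrix
avg_var = np.trace(cov) / N
avg_cov = (cov.sum() - np.trace(cov)) / (N * (N - 1))

lhs = preds.mean(axis=0).var(ddof=1)   # variance of the ensemble average
rhs = avg_var / N + (1 - 1 / N) * avg_cov
print(lhs, rhs)
```

Because the sample covariance is bilinear, `lhs` and `rhs` agree to floating-point precision; as avg_cov approaches avg_var (fully correlated members) the averaging gain vanishes.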