1990
DOI: 10.1109/34.58871
Neural network ensembles

Abstract: We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining residual "generalization" error can be reduced by invoking ensembles of similar networks.

Index Terms: Cross-validation, fault-tolerant computing, neural networks, N-version programming.
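The paper's central observation can be illustrated numerically: if N networks err independently, each with error rate p < 1/2, the probability that a majority of them is simultaneously wrong falls rapidly as N grows. A minimal sketch of that binomial calculation (the function name is ours):

```python
from math import comb

def majority_vote_error(p: float, n: int) -> float:
    """Probability that a majority of n independent classifiers,
    each with error rate p, are simultaneously wrong (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# With p < 0.5 the ensemble error shrinks as more members vote:
for n in (1, 3, 11, 21):
    print(n, round(majority_vote_error(0.3, n), 4))
```

For p = 0.3 a single network errs 30% of the time, while three voting members already cut that to 21.6%; the trend continues for larger odd N as long as the members' errors stay independent.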

Cited by 3,626 publications
(1,820 citation statements)
References 15 publications
“…Hansen and Salamon's work [3] shows that the generalisation ability of an ANN increases through ensembling. Because of this, ANN ensemble techniques have become very popular in ANN applications.…”
Section: Neural Network Ensembles
confidence: 99%
“…A collection of a finite number of ANNs trained for the same task is called an ensemble neural network (ENN). Hansen and Salamon [3] explain that the generalisation ability of an ANN can be significantly improved through an ENN, and that the strengths of the separate ANNs can be combined to give a better result. In an ensemble, the separate ANNs are trained individually and their outputs are then combined.…”
confidence: 99%
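The combining step described in that citation can be sketched in two common forms: averaging the members' class-probability outputs, or taking a plurality vote over their hard labels. The function names below are illustrative, not from the cited works:

```python
import numpy as np

def combine_average(prob_sets):
    """Average each network's class-probability matrix, then take argmax."""
    return np.mean(prob_sets, axis=0).argmax(axis=1)

def combine_vote(label_sets):
    """Plurality vote over the hard labels emitted by each network."""
    labels = np.asarray(label_sets)          # shape: (n_networks, n_samples)
    n_classes = labels.max() + 1
    counts = np.apply_along_axis(np.bincount, 0, labels, minlength=n_classes)
    return counts.argmax(axis=0)             # winning class per sample

# Three networks, three samples; the middle sample is unanimously class 1.
print(combine_vote([[0, 1, 1], [0, 1, 0], [1, 1, 0]]))  # -> [0 1 0]
```

Probability averaging tends to be the gentler combiner when the members are well calibrated, while voting only needs hard labels.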
“…In ML research, the most common way to introduce diversity into an ensemble is to train the ensemble members on different subsets of the training data [2,12]. As stated above, this works very well for unstable learners such as neural networks but will not work for a stable learner such as a nearest neighbour classifier.…”
Section: Diversity In Ensembles
confidence: 99%
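The subset-training scheme that citation describes is commonly known as bagging: each member is trained on a bootstrap resample and the members vote at prediction time. A minimal sketch with a hypothetical 1-D decision-stump base learner (all names and the toy data are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_stump(X, y):
    """Fit a 1-D decision stump: choose the threshold with the lowest
    training error, predicting class 1 above it."""
    best = min(np.unique(X), key=lambda t: np.mean((X > t) != y))
    return lambda Z: (Z > best).astype(int)

def bag(X, y, n_members=11):
    """Train each member on a bootstrap resample of the data,
    then combine by majority vote at prediction time."""
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        members.append(train_stump(X[idx], y[idx]))
    return lambda Z: (np.mean([m(Z) for m in members], axis=0) > 0.5).astype(int)

# Two overlapping 1-D classes; the bagged vote smooths out unstable stumps.
X = rng.normal(size=200) + np.repeat([0.0, 2.0], 100)
y = np.repeat([0, 1], 100)
predict = bag(X, y)
print("training accuracy:", np.mean(predict(X) == y))
```

As the quoted passage notes, this works because resampling destabilises an unstable learner into diverse members; a stable learner such as nearest neighbour would produce nearly identical members and gain little.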
“…It has been shown that the generalization performance of an ensemble method is a function of the diversity and the accuracy of the individual machines in the ensemble [18], [19]. The diversity of the nets is provided through the strategy presented in the previous subsection.…”
Section: Filtering Less Accurate Nets
confidence: 99%
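One simple way to quantify the diversity that citation refers to is mean pairwise disagreement between member predictions. The metric choice and function name below are ours, not from the cited works:

```python
import numpy as np
from itertools import combinations

def mean_pairwise_disagreement(predictions):
    """Average, over all pairs of ensemble members, the fraction of
    samples on which the two members predict different labels."""
    preds = np.asarray(predictions)
    pairs = combinations(range(len(preds)), 2)
    return float(np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs]))

# Three members, three samples: pairwise disagreements 1/3, 2/3, 1/3 -> 4/9.
print(mean_pairwise_disagreement([[0, 0, 1], [0, 1, 1], [1, 1, 1]]))
```

A disagreement of 0 means the members are redundant copies; values well above 0, alongside high individual accuracy, are the combination the quoted passage identifies as favourable.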