Proceedings of IEEE Workshop on Neural Networks for Signal Processing
DOI: 10.1109/nnsp.1994.366050

Classification using hierarchical mixtures of experts

Abstract: There has recently been widespread interest in the use of multiple models for classification and regression in the statistics and neural networks communities. The Hierarchical Mixture of Experts (HME) [1] has been successful in a number of regression problems, yielding significantly faster training through the use of the Expectation Maximisation algorithm. In this paper we extend the HME to classification and results are reported for three common classification benchmark tests: Exclusive-Or, N-input Parity and…
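For readers unfamiliar with the architecture the abstract refers to, the following is a minimal sketch of a two-level HME forward pass for classification, assuming softmax gating networks and softmax (multinomial logit) experts; the dimensions, parameter names, and random initialisation are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a two-level Hierarchical Mixture of Experts (HME)
# classifier forward pass: a top gate mixes branches, each branch's gate
# mixes its experts, and each expert outputs class probabilities.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

D, C = 2, 2                    # input dimension, number of classes (e.g. XOR)
n_branches, n_experts = 2, 2   # top-level branches, experts per branch

# Linear parameters for the top gate, the nested gates, and the experts
# (illustrative random values; a real model would fit these, e.g. by EM).
top_gate_W = rng.normal(size=(D, n_branches))
sub_gate_W = rng.normal(size=(n_branches, D, n_experts))
expert_W   = rng.normal(size=(n_branches, n_experts, D, C))

def hme_predict(x):
    """Return class probabilities P(c | x) for a single input x."""
    g_top = softmax(x @ top_gate_W)                    # (n_branches,)
    probs = np.zeros(C)
    for i in range(n_branches):
        g_sub = softmax(x @ sub_gate_W[i])             # (n_experts,)
        for j in range(n_experts):
            expert_out = softmax(x @ expert_W[i, j])   # (C,)
            probs += g_top[i] * g_sub[j] * expert_out  # gate-weighted mixture
    return probs

x = np.array([1.0, 0.0])
print(hme_predict(x))   # class probabilities, summing to 1
```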

Cited by 84 publications (83 citation statements)
References 7 publications

Citation statements:
“…AdaBoost's idea of focusing a classifier on hard training examples, Mixtures-of-Experts goes one step further by training individual classifiers, also called local experts, at different data partitions and combining the results from multiple classifiers in a trainable way [45,46]. In Mixtures-of-Experts, a gating network is employed to assign local experts to different data partitions and the local experts, which can be various classifiers, take the input data and make predictions.…”
Section: Mixtures-of-Experts - Similar To (mentioning)
confidence: 99%
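The excerpt above describes the gating network as softly assigning inputs to local experts. A minimal sketch of that idea, again assuming softmax gating and softmax experts: each expert's posterior "responsibility" for a labelled example is its gate weight times its likelihood, renormalised, and these responsibilities are the quantities an EM-style training pass (as mentioned in the abstract) would re-estimate. All names and shapes below are illustrative assumptions.

```python
# Soft data partition induced by a gating network: posterior P(expert k | x, y).
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def responsibilities(x, y, gate_W, expert_Ws):
    """Posterior probability of each expert for one labelled example (x, y)."""
    gate = softmax(x @ gate_W)                              # prior: P(k | x)
    lik = np.array([softmax(x @ W)[y] for W in expert_Ws])  # P(y | x, k)
    post = gate * lik
    return post / post.sum()                                # P(k | x, y)

rng = np.random.default_rng(1)
D, C, K = 2, 2, 3
x, y = rng.normal(size=D), 1
gate_W = rng.normal(size=(D, K))
expert_Ws = [rng.normal(size=(D, C)) for _ in range(K)]
print(responsibilities(x, y, gate_W, expert_Ws))  # sums to 1 across experts
```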
“…When the HME was used in time-series prediction tasks [30,37], results better than those of leading methods, including those using MLP networks [38], were obtained. Significant improvement of an existing flow control scheme [29] has also been achieved by the authors, using an HME-based predictor [31].…”
Section: Hierarchical Mixtures of Experts (mentioning)
confidence: 99%
“…By then considering only the most probable path through the tree, they pruned branches away, either temporarily or permanently in case of redundancy [138]. This improved HME showed significant speedups and more efficient use of parameters over the standard fixed HME structure in discriminating for artificial applications as well as prediction of parameterized speech over short time segments [137]. The HME architecture has also been applied to text-dependent speaker identification [21].…”
Section: Hierarchical Mixture of Experts (mentioning)
confidence: 99%
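The pruning idea quoted above (following only the most probable path of gating decisions and discarding branches that contribute negligibly) can be sketched as follows; the tree representation, threshold, and function names are illustrative assumptions, not the cited authors' exact procedure.

```python
# Sketch: greedy most-probable-path traversal of an HME gating tree, plus a
# simple collection of branches whose gate probability falls below a threshold
# (candidates for temporary or permanent pruning).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    children: List["Node"] = field(default_factory=list)
    gate_probs: Optional[List[float]] = None  # P(child | x) for internal nodes

def most_probable_path(node: Node) -> List[str]:
    """Greedily follow the highest-gate-probability child down to a leaf expert."""
    path = [node.name]
    while node.children:
        best = max(range(len(node.children)), key=lambda i: node.gate_probs[i])
        node = node.children[best]
        path.append(node.name)
    return path

def prune_candidates(node: Node, threshold: float = 0.05) -> List[str]:
    """Collect branches whose gate probability is negligible for this input."""
    pruned = []
    for p, child in zip(node.gate_probs or [], node.children):
        if p < threshold:
            pruned.append(child.name)
        else:
            pruned.extend(prune_candidates(child, threshold))
    return pruned

tree = Node("root",
            children=[Node("expert_a"),
                      Node("branch",
                           children=[Node("expert_b"), Node("expert_c")],
                           gate_probs=[0.97, 0.03])],
            gate_probs=[0.02, 0.98])
print(most_probable_path(tree))   # ['root', 'branch', 'expert_b']
print(prune_candidates(tree))     # ['expert_a', 'expert_c']
```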