1991
DOI: 10.1162/neco.1991.3.1.79

Adaptive Mixtures of Local Experts

Abstract: We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks…
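To make the procedure concrete, here is a minimal sketch of a mixture of experts in PyTorch, assuming linear experts, a softmax gating network, and toy random data; the class and function names are illustrative, not from the paper. The loss follows the gated error of the form the paper advocates, -log Σ_i p_i exp(-||d - o_i||²/2), which credits each training case mainly to a single responsible expert rather than a blend of all of them.

import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, in_dim, out_dim, n_experts):
        super().__init__()
        # Each expert is a small local network (linear here for brevity);
        # the gating network produces mixing proportions p_i(x) via softmax.
        self.experts = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(n_experts)])
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, x):
        outputs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, n_experts, out_dim)
        gates = torch.softmax(self.gate(x), dim=-1)                 # (batch, n_experts)
        return outputs, gates

def moe_loss(outputs, gates, target):
    # Negative log of sum_i p_i * exp(-0.5 * ||d - o_i||^2): the expert that
    # already explains a case best receives most of the learning signal.
    sq_err = ((target.unsqueeze(1) - outputs) ** 2).sum(dim=-1)     # (batch, n_experts)
    return -torch.logsumexp(torch.log(gates + 1e-12) - 0.5 * sq_err, dim=-1).mean()

# Toy usage on random data (illustrative only).
model = MixtureOfExperts(in_dim=4, out_dim=2, n_experts=3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, d = torch.randn(64, 4), torch.randn(64, 2)
for _ in range(100):
    optimizer.zero_grad()
    expert_out, gate_probs = model(x)
    loss = moe_loss(expert_out, gate_probs, d)
    loss.backward()
    optimizer.step()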

Cited by 3,460 publications (1,968 citation statements)
References 1 publication

“…S1 in Supplementary Material). This architecture was constructed in a simulation study of object manipulation (Gomi and Kawato 1993) based on a mixture-of-experts model (Jacobs et al. 1991). The mixture-of-experts model involves expert modules, which are equivalent to internal inverse models, and a gating module.…”
Section: Simulations (mentioning)
confidence: 99%
“…Mixture of experts (Jacobs et al., 1991; Jordan and Jacobs, 1994) are used in a variety of contexts including regression, classification and clustering. Here we consider the MoE framework for fitting (non-linear) regression functions and clustering of univariate continuous data.…”
Section: Mixture Of Experts For Continuous Data (mentioning)
confidence: 99%
“…Mixture of experts (MoE) introduced by Jacobs et al. (1991) are widely studied in statistics and machine learning. They consist in a fully conditional mixture model where both the mixing proportions, known as the gating functions, and the component densities, known as the experts, are conditional on some input covariates.…”
Section: Introduction (mentioning)
confidence: 99%
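Written out, the conditional mixture described in the excerpt above takes the following form; the notation here is generic MoE notation, not taken from the quoted paper. The gating functions are typically softmax functions of the covariates, and each expert contributes a conditional density:

\[
  p(y \mid x; \theta) = \sum_{k=1}^{K} g_k(x; \alpha)\, f_k(y \mid x; \beta_k),
  \qquad
  g_k(x; \alpha) = \frac{\exp(\alpha_k^{\top} x)}{\sum_{j=1}^{K} \exp(\alpha_j^{\top} x)} .
\]

For the regression setting mentioned in the earlier excerpt, a common choice is Gaussian experts, \( f_k(y \mid x; \beta_k) = \mathcal{N}(y;\, \beta_k^{\top} x,\, \sigma_k^2) \).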
“…The algorithms for training the mixture of autoencoders are related to a formulation of the adaptive mixture model (Jacobs & Jordan, 1993; Jacobs, Jordan, Nowlan & Hinton, 1991; Jordan & Jacobs, 1994). The adaptive mixture model can be regarded as a supervised version of the mixture of autoencoders model.…”
Section: Unsupervised Training Based On Maximum Likelihood Estimation (mentioning)
confidence: 99%