1994
DOI: 10.1006/inco.1994.1009

The Weighted Majority Algorithm

Abstract: We study the construction of prediction algorithms in a situation in which a learner faces a sequence of trials, with a prediction to be made in each, and the goal of the learner is to make few mistakes. We are interested in the case that the learner has reason to believe that one of some pool of known algorithms will perform well, but the learner does not know which one. A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm in such a circumstance. We cal…
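As a rough illustration of the weighted-voting method the abstract describes, here is a minimal sketch in Python. It is not the paper's exact pseudocode: it assumes binary {0, 1} predictions, ties broken toward 1, and the standard multiplicative penalty factor beta in [0, 1); all names are illustrative.

```python
def weighted_majority(experts, trials, beta=0.5):
    """Combine a pool of prediction algorithms by weighted voting.

    experts: list of callables, each mapping an input x to 0 or 1.
    trials:  iterable of (x, y) pairs revealed one at a time.
    beta:    multiplicative penalty applied to experts that err.
    Returns the number of mistakes made by the compound algorithm.
    """
    weights = [1.0] * len(experts)
    mistakes = 0
    for x, y in trials:
        votes = [e(x) for e in experts]
        # Weighted vote: predict 1 iff the total weight backing 1 is
        # at least the total weight backing 0 (tie goes to 1 here).
        weight_for_1 = sum(w for w, v in zip(weights, votes) if v == 1)
        weight_for_0 = sum(w for w, v in zip(weights, votes) if v == 0)
        prediction = 1 if weight_for_1 >= weight_for_0 else 0
        if prediction != y:
            mistakes += 1
        # Penalize every expert that predicted incorrectly this trial.
        weights = [w * beta if v != y else w
                   for w, v in zip(weights, votes)]
    return mistakes
```

Because every erring expert is penalized multiplicatively, the compound algorithm's mistake count tracks that of the best expert in the pool up to a bounded overhead, which is the guarantee the paper establishes.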

Cited by 1,347 publications (446 citation statements)
References 1 publication
“…We use the tightest available PAC-learning bounds, due to Anthony, Biggs and Shawe-Taylor (1990), to argue that with high probability, a hypothesis consistent with the subsample can't be too bad on the whole sample. Littlestone and Warmuth (1989) describe a variant of the weighted majority algorithm where the weights are kept above some lower limit. This allows the weighted majority algorithm to recover and adapt to changes in the target.…”
Section: Introduction (mentioning)
confidence: 99%
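The variant this excerpt describes keeps each weight above a lower limit so that no expert's influence vanishes permanently, letting the ensemble recover when the target changes. A minimal sketch of that modified update step, assuming the same multiplicative penalty as above and a hypothetical `floor` parameter:

```python
def update_with_floor(weights, votes, y, beta=0.5, floor=0.01):
    # Clip each penalized weight at `floor` (an illustrative name and
    # value, not from the paper) so a previously poor expert can regain
    # influence quickly if it starts predicting well again.
    return [max(w * beta, floor) if v != y else w
            for w, v in zip(weights, votes)]
```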
“…The training error ε_i indicates how well classifier C_i fits the training data. This estimation has been validated in [62]. For easier interpretation, we can normalize these weights so that they sum to 1; however, normalization does not change the outcome of weighted majority voting.…”
Section: Weighted-majority Voting (mentioning)
confidence: 93%
“…Weighted-majority voting (WMV) is a meta-learning reasoning procedure [62]. The first type of majority voting refers to the decision when all experts agree on the same output (unanimous voting).…”
Section: Weighted-majority Voting (mentioning)
confidence: 99%
“…Pool of models for the ensemble method! Figure 1 presents the details of the proposed ensemble method, which implements a modified version of the weighted majority algorithm (WMA) [11]. The modified WMA returns a KPI degradation level in the range [0, 1] and uses context information for updating the weights and creating new models.…”
Section: M3! (mentioning)
confidence: 99%
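The modified WMA in this excerpt returns a degradation level in [0, 1] rather than a discrete vote. One plausible reading, sketched here under the assumption that the ensemble output is a weight-normalized average of the individual models' scores (the excerpt's context-based weight update and model creation are omitted):

```python
def degradation_level(models, x, weights):
    # Each model returns a KPI degradation score in [0, 1]; the
    # ensemble output is the weight-normalized average, which also
    # lies in [0, 1]. This combination rule is an assumption for
    # illustration, not necessarily the rule used in [11].
    total = sum(weights)
    return sum(w * m(x) for m, w in zip(models, weights)) / total
```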