“…This algorithm, called Winnow, has the advantage of quickly driving the weights of irrelevant input features toward zero, making it effective when there are many features but only a few of them are relevant to the classification task. A number of variations of the Winnow algorithm have since been studied, both in terms of provable error bounds (Littlestone & Warmuth, 1994; Kivinen & Warmuth, 1997; Crammer & Singer, 2001; Mesterharm, 2002) and empirical performance on natural language processing tasks such as document categorization (Dagan, Karov, & Roth, 1997) and spelling correction (Golding & Roth, 1999). However, the error rates of both the additive and multiplicative algorithms are significantly higher when feedback is limited, especially in the important case of k = 1 (simple confirmation), as illustrated by the empirical results presented in this paper.…”
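The multiplicative update behind this property can be sketched as follows. This is a minimal illustrative sketch of the classic Winnow update (not code from the quoted paper), assuming binary feature vectors in {0,1}^n, binary labels, the standard threshold θ = n, and a promotion/demotion factor α = 2; the function name `winnow_train` is hypothetical.

```python
def winnow_train(examples, n, alpha=2.0):
    """Online Winnow: multiplicative weight updates, made only on mistakes.

    examples: iterable of (x, y) pairs with x a 0/1 list of length n, y in {0, 1}.
    Returns the learned weight vector.
    """
    w = [1.0] * n  # all weights start at 1
    theta = n      # standard threshold choice
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != y:
            if y == 1:
                # False negative: promote weights of active features
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            else:
                # False positive: demote weights of active features;
                # repeated demotions drive irrelevant weights toward zero
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w
```

Because demotion divides by α on every mistake involving an irrelevant active feature, those weights shrink geometrically, which is why Winnow copes well when only a few of many features matter.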