Proceedings of the Tenth Annual Conference on Computational Learning Theory (COLT '97), 1997
DOI: 10.1145/267460.267493

General convergence results for linear discriminant updates

Abstract: The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of "quasi-additive" algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including not only Perceptron and Winnow but also many new algorithms. Our proof hinges on…
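
To make the abstract's setting concrete, here is a minimal sketch of a mistake-driven quasi-additive update in the spirit described above (a hedged reconstruction, not the paper's exact formulation): a cumulative vector z is updated additively on mistakes, while predictions use a componentwise transfer of z. The function names and learning rate are illustrative assumptions; the identity transfer yields a Perceptron-style rule and the exponential transfer a Winnow-style multiplicative rule.

```python
import numpy as np

def quasi_additive(examples, transfer, eta=1.0):
    """Sketch of a mistake-driven quasi-additive update.

    A cumulative vector z is updated additively on mistakes; the weight
    vector used for prediction is w = transfer(z), applied componentwise.
    """
    z = np.zeros(len(examples[0][0]))
    mistakes = 0
    for x, y in examples:          # labels y in {-1, +1}
        w = transfer(z)            # componentwise link from z to weights
        if y * np.dot(w, x) <= 0:  # mistake (or zero margin): update
            z += eta * y * np.asarray(x, dtype=float)  # always additive in z
            mistakes += 1
    return transfer(z), mistakes

# Identity transfer gives a Perceptron-style rule; the exponential
# transfer gives an (unnormalized) Winnow-style rule:
# w_perc, m1 = quasi_additive(data, lambda z: z)
# w_winn, m2 = quasi_additive(data, np.exp)
```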

Cited by 69 publications (148 citation statements)
References 23 publications (13 reference statements)

“…In work parallel to this, the general additive update (9) in the context of linear classification, i.e., with a thresholded transfer function, has recently been developed and analyzed by Grove, Littlestone, and Schuurmans (1997) with methods and results very similar to ours; see also Gentile and Littlestone (1999). Gentile and Warmuth (1999) have shown how the notion of matching loss can be generalized to thresholded transfer functions.…”
Section: Introduction
confidence: 78%
“…The normalized Winnow algorithm (Littlestone, 1989) is analogous to the exponentiated gradient algorithm: the loss bound depends on the product of the ∞-norm of the instances and the 1-norm of the correct weight vector. Grove, Littlestone, and Schuurmans (1997) have generalized this by giving, for each p > 2, a classification algorithm that has a loss bound in terms of the p-norm of the instances and the q-norm of the correct weight vector, where 1/p + 1/q = 1. This result for general norm pairs has also been extended to linear regression (Gentile & Littlestone, 1999).…”
Section: Let Us Define Loss
confidence: 99%
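
The p-norm family cited in this excerpt can be sketched as the same quasi-additive scheme with a transfer function equal to the gradient of half the squared p-norm. This is a hedged reconstruction (the normalization below follows Gentile's later p-norm Perceptron formulation and may differ from the exact mapping in Grove et al.); all names are illustrative.

```python
import numpy as np

def p_norm_link(z, p):
    """Gradient of (1/2) * ||z||_p^2, used as the componentwise link:
    f_i(z) = sign(z_i) * |z_i|**(p-1) / ||z||_p**(p-2)."""
    norm = np.linalg.norm(z, ord=p)
    if norm == 0.0:
        return np.zeros_like(z)
    return np.sign(z) * np.abs(z) ** (p - 1) / norm ** (p - 2)

def p_norm_perceptron(examples, p=3.0, eta=1.0):
    """Mistake-driven p-norm update: p close to 2 behaves like the
    Perceptron, large p behaves like a multiplicative (Winnow-like) rule."""
    z = np.zeros(len(examples[0][0]))
    for x, y in examples:             # labels y in {-1, +1}
        w = p_norm_link(z, p)
        if y * np.dot(w, x) <= 0:     # mistake: additive update in z-space
            z += eta * y * np.asarray(x, dtype=float)
    return p_norm_link(z, p)
```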
“…Online learning of linear classifiers is an important and well-studied domain in machine learning with interesting theoretical properties and practical applications (Cesa-Bianchi et al. 2002; Crammer et al. 2005; Gentile 2001; Grove et al. 2001; Helmbold et al. 1999; Kivinen et al. 2002; Kivinen and Warmuth 1997; Li and Long 2002). An online learning algorithm observes instances in a sequence of trials.…”
Section: Introduction
confidence: 99%
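
The trial-by-trial protocol mentioned in this excerpt is the standard predict-then-update loop sketched below; the learner object with predict and update methods is a hypothetical interface, not taken from any of the cited papers.

```python
def run_trials(learner, stream):
    """Generic online classification protocol: on each trial the learner
    predicts a label for the instance, then observes the true label and
    may update its internal state."""
    mistakes = 0
    for x, y in stream:
        y_hat = learner.predict(x)   # 1. predict on the revealed instance
        if y_hat != y:               # 2. the true label is then revealed
            mistakes += 1
        learner.update(x, y)         # 3. update (mistake-driven learners
                                     #    typically change state only on errors)
    return mistakes
```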
“…The flurry of online learning algorithms sparked unified analyses of seemingly different online algorithms by Littlestone, Warmuth, Kivinen, and colleagues (Kivinen and Warmuth 1997; Littlestone 1988). Most notable is the work of Grove, Littlestone, and Schuurmans (Grove et al. 2001) on a quasi-additive family of algorithms, which includes both the Perceptron (Rosenblatt 1958) and the Winnow (Littlestone 1988) algorithms as special cases. A similar unified view for regression was derived by Kivinen and Warmuth (1997, 2001).…”
Section: Introduction
confidence: 99%
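
For the regression counterpart credited to Kivinen and Warmuth, a single exponentiated-gradient (EG) step for square loss can be sketched as a multiplicative, renormalized update. The learning rate and the constraint that w lies on the probability simplex are illustrative assumptions, not details taken from this excerpt.

```python
import numpy as np

def eg_step(w, x, y, eta=0.1):
    """One EG update for linear regression with square loss:
    w_i <- w_i * exp(-eta * dL/dw_i) / Z, where L = (w.x - y)^2."""
    y_hat = np.dot(w, x)
    r = w * np.exp(-2.0 * eta * (y_hat - y) * x)  # multiplicative step
    return r / r.sum()                            # renormalize onto the simplex
```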