2001
DOI: 10.1007/3-540-44581-1_7

Ultraconservative Online Algorithms for Multiclass Problems

Cited by 220 publications (327 citation statements)
References 7 publications
“…5, we show the expected relative improvement as a function of the strength of the base classifiers. More specifically, each position t ∈ [1,15] on the horizontal axis shows the result when the initially generated α_ij-values are multiplied by t. The plots are representative for all the initial values that we have generated; in total we generated 200 curves, and at least 95% of these curves lie in between the plotted 20 curves. We may formulate the following three observations.…”
Section: Experimental Setup and Results
confidence: 99%
“…The setting of label ranking can be seen as an extension of the conventional setting of classification [13][14][15][11]. Roughly speaking, the former is obtained from the latter through replacing single class labels by complete label rankings.…”
Section: Generalizing the Classification Setting
confidence: 99%
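The excerpt above contrasts the two problem settings. As a small hypothetical illustration (the label names below are made up), a classification target is a single label, while a label-ranking target is a complete order over all labels; a plain class label corresponds to a ranking that merely places that label first:

```python
labels = ["cat", "dog", "bird"]

# Classification: one label per instance.
y_classification = "dog"

# Label ranking: a total order over all labels, most preferred first.
# The classification target above corresponds to any ranking with
# "dog" in the top position.
y_ranking = ["dog", "cat", "bird"]
```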
“…This algorithm, called Winnow, has the advantage of quickly driving the weights of irrelevant input features toward zero, making the algorithm more effective when there are a large number of features but only a few of them are important to the classification task. A number of variations of the Winnow algorithm have since been studied, both in terms of provable error bounds (Littlestone & Warmuth, 1994; Kivinen & Warmuth, 1997; Crammer & Singer, 2001; Mesterharm, 2002) and empirical performance on natural language processing tasks such as document categorization (Dagan, Karov, & Roth, 1997) and spelling correction (Golding & Roth, 1999). However, the error rates for both the additive and multiplicative algorithms are significantly higher when feedback is limited, especially for the important case of k = 1 (simple confirmation), as illustrated by the empirical results presented in this paper.…”
Section: Introduction
confidence: 99%
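The excerpt above describes Winnow's multiplicative, mistake-driven weight update. Here is a minimal Python sketch of that update for binary classification with {0, 1} features; the promotion factor `eta` and the threshold `n/2` are common textbook defaults assumed for illustration, not settings taken from any of the cited papers.

```python
import numpy as np

def winnow_train(X, y, eta=2.0):
    """Minimal sketch of Littlestone's Winnow update.

    Assumes X has binary {0, 1} features and y has labels in {0, 1}.
    Hyperparameters are illustrative defaults, not the settings used
    in the cited papers.
    """
    n_features = X.shape[1]
    threshold = n_features / 2.0  # common textbook choice: theta = n/2
    w = np.ones(n_features)       # all weights start at 1
    for x, label in zip(X, y):
        pred = 1 if w @ x >= threshold else 0
        if pred == label:
            continue              # mistake-driven: update only on errors
        if label == 1:
            w[x == 1] *= eta      # promote features active in a missed positive
        else:
            w[x == 1] /= eta      # demote features active in a false positive
    return w

# Toy usage: the label depends only on the first feature, so the other
# (irrelevant) weights are demoted toward zero after a few mistakes,
# which is the behavior the excerpt highlights.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10))
y = X[:, 0]
print(winnow_train(X, y))
```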