1988
DOI: 10.1007/bf00116827

Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm

Abstract: Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in which the learner responds to each example according to a current hypothesis. Then the learner updates the hypothesis, if necessary, based on the correct classification of the example. One natural measure of the quality of learning in this setting is the number of mistakes the learner makes. For suitable cl…
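The predict-then-update protocol the abstract describes is what the paper's Winnow algorithm instantiates: weights are promoted multiplicatively on false negatives and eliminated on false positives, so mistakes grow only logarithmically with the number of irrelevant attributes. A minimal sketch in the spirit of Winnow1, assuming Boolean attribute vectors and a monotone disjunction target (all names, the example stream, and the parameter choices are illustrative):

```python
def winnow_predict(weights, x, threshold):
    """Predict 1 iff the weighted sum over active attributes reaches the threshold."""
    return 1 if sum(w for w, xi in zip(weights, x) if xi) >= threshold else 0

def winnow_update(weights, x, y_true, alpha=2.0):
    """Multiplicative update, applied only when a mistake was made."""
    if y_true == 1:
        # False negative: promote the weights of active attributes.
        return [w * alpha if xi else w for w, xi in zip(weights, x)]
    # False positive: eliminate the weights of active attributes (Winnow1-style).
    return [0.0 if xi else w for w, xi in zip(weights, x)]

# Illustrative target: the disjunction x1 OR x3 over n = 4 attributes.
n = 4
weights = [1.0] * n
threshold = float(n)
mistakes = 0
examples = [([1, 0, 0, 0], 1), ([0, 1, 0, 0], 0), ([0, 0, 1, 1], 1),
            ([0, 1, 0, 1], 0), ([1, 1, 0, 0], 1)]
for x, y in examples * 3:  # cycle through the stream a few times
    y_hat = winnow_predict(weights, x, threshold)
    if y_hat != y:
        mistakes += 1
        weights = winnow_update(weights, x, y)
# After training, all five examples are classified correctly.
```

The point of the mistake count as a quality measure is visible here: irrelevant attributes are driven to zero after a handful of errors, independent of how long the stream runs.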

Cited by 661 publications (666 citation statements)
References 15 publications
“…developed efficient algorithms for exact learning boolean threshold functions, 2-term RSE, and 2-term DNF in the RWOnline model. Those classes are already known to be learnable in the Online model [L87,FS92], but the algorithms in [BFH95] achieve a better mistake bound (for threshold functions).…”
Section: Previous Results (mentioning)
confidence: 99%
“…developed efficient algorithms for exact learning boolean threshold functions, 2-term Ring-Sum Expansion (parity of 2 monotone terms) and 2-term DNF in the RWOnline model. Those classes are already known to be learnable in the Online model [L87,FS92] (and therefore in the RWOnline model), but the algorithm in [BFH95] for boolean threshold functions achieves a better mistake bound. They show that this class can be learned by making no more than n + 1 mistakes in the RWOnline model, improving on the O(n log n) bound for the Online model proven by Littlestone in [L87].…”
Section: RWOnline Versus Online (mentioning)
confidence: 99%
“…This parameter allows the Balanced Winnow to be further adjusted to fit the data optimally. The three parameters cause the Balanced Winnow algorithm to update the models and accurately determine the class of each instance [8,9]. The Balanced Winnow is based on two functions.…”
Section: Balanced Winnow Neural Network (mentioning)
confidence: 99%
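The statement above alludes to Balanced Winnow's two functions (a scoring function and an update function) and its adjustable parameters. A minimal sketch of that structure, assuming a pair of weight vectors with promotion factor, demotion factor, and threshold as the three parameters (the task, parameter values, and all names here are illustrative, not taken from the cited paper):

```python
def bw_score(w_pos, w_neg, x):
    """Scoring function: difference of the paired weighted sums over active attributes."""
    return sum(wp - wn for wp, wn, xi in zip(w_pos, w_neg, x) if xi)

def bw_update(w_pos, w_neg, x, y_true, alpha=1.5, beta=0.5):
    """Update function: multiplicative promotion/demotion of the paired weights."""
    for i, xi in enumerate(x):
        if not xi:
            continue
        if y_true == 1:   # false negative: push the score up on this input
            w_pos[i] *= alpha
            w_neg[i] *= beta
        else:             # false positive: push the score down on this input
            w_pos[i] *= beta
            w_neg[i] *= alpha

# Illustrative target over 2 attributes: y = 1 iff x0 is on and x1 is off.
theta = 1.0                      # decision threshold, the third parameter
w_pos, w_neg = [1.0, 1.0], [1.0, 1.0]
examples = [([1, 0], 1), ([1, 1], 0), ([0, 1], 0), ([0, 0], 0)]
for x, y in examples * 6:        # mistake-driven online loop
    y_hat = 1 if bw_score(w_pos, w_neg, x) >= theta else 0
    if y_hat != y:
        bw_update(w_pos, w_neg, x, y)
```

The paired positive/negative weights let the effective weight of an attribute go negative, which a single nonnegative Winnow weight vector cannot express.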
“…The mistake-bound model of learning, introduced by Littlestone (Littlestone, 1988; 1989), has attracted a considerable amount of attention (e.g., Littlestone, 1988; Littlestone & Warmuth, 1994; Blum, 1994; Blum, 1992a; Maass, 1991; Chen & Maass, 1994; Helmbold, Littlestone & Long, 1992; Goldman, Rivest & Schapire, 1993; Goldman & Sloan, 1994) and is recognized as one of the central models of computational learning theory. Basically it models a process of incremental learning, where the learner discovers the 'labels' of instances one by one.…”
Section: Introduction (mentioning)
confidence: 99%
“…We present an off-line variant of the mistake-bound model of learning. This is an intermediate model between the on-line learning model (Littlestone, 1988) and the self-directed learning model (Goldman, Rivest & Schapire, 1993; Goldman & Sloan, 1994). Just like in the other two models, a learner in the off-line model has to learn an unknown concept from a sequence of elements of the instance space on which it makes "guess and test" trials.…”
mentioning
confidence: 99%