28th Annual Symposium on Foundations of Computer Science (SFCS 1987), 1987
DOI: 10.1109/sfcs.1987.37

Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm

Abstract: Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in which the learner responds to each example according to a current hypothesis. Then the learner updates the hypothesis, if necessary, based on the correct classification of the example. One natural measure of the quality of learning in this setting is the number of mistakes the learner makes. For suitable classes of f…
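The abstract describes the online mistake-bound setting: the learner predicts with its current hypothesis and revises it only after an error. As a rough illustration of the linear-threshold algorithm named in the title (a sketch, not code reproduced from the paper), the following Python snippet implements a Winnow-style multiplicative update for Boolean attributes. The function name make_winnow, the promotion factor alpha = 2, the threshold n/2, and the divide-by-alpha demotion (a Winnow2-style variant rather than zeroing weights) are illustrative textbook choices, not details taken from this report.

    # Sketch of a Winnow-style online learner (after Littlestone, 1988).
    # Assumptions: examples are 0/1 vectors of length n, labels are 0/1.
    def make_winnow(n, alpha=2.0):
        weights = [1.0] * n          # all attribute weights start at 1
        theta = n / 2.0              # fixed linear threshold

        def predict(x):
            total = sum(w * xi for w, xi in zip(weights, x))
            return 1 if total >= theta else 0

        def update(x, label):
            # Update only when the current hypothesis makes a mistake.
            if predict(x) == label:
                return False                 # no mistake, hypothesis unchanged
            for i, xi in enumerate(x):
                if xi:                       # only active attributes change
                    if label == 1:           # false negative: promote
                        weights[i] *= alpha
                    else:                    # false positive: demote
                        weights[i] /= alpha
            return True                      # one mistake counted

        return predict, update

    # Toy usage: stream examples through the learner and count mistakes.
    examples = [([1, 0, 1] + [0] * 97, 1), ([0] * 100, 0)]
    predict, update = make_winnow(n=100)
    mistakes = sum(update(x, y) for x, y in examples)

Because only the weights of active attributes are multiplied up or down, the number of mistakes grows only logarithmically with the total number of attributes for suitable target classes, which is the "irrelevant attributes abound" point of the title.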

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

3
442
0
2

Year Published

1994
1994
2016
2016

Publication Types

Select...
6
4

Relationship

0
10

Authors

Journals

citations
Cited by 277 publications
(447 citation statements)
references
References 12 publications
3
442
0
2
Order By: Relevance
“…The goal is to minimize the total cost of the selected sets. In this paper, we describe a new randomized algorithm for the online multicover problem based on a randomized version of the winnowing approach of [15]. This algorithm generalizes and improves some earlier results in [1,2].…”
Section: Introduction
confidence: 70%
“…To evaluate the effectiveness of FeatureMine, we used the feature set it produces as input to two standard classification algorithms: Winnow (Littlestone, 1988) and Naive Bayes (Duda and Hart, 1973). We ran experiments on three datasets described below.…”
Section: Methods
confidence: 99%
“…It is well known that in a variety of passive learning models, such as Valiant's PAC model, [58], and the mistake bound models of Littlestone and Haussler et al., [47,38], it is intractable to learn or even approximate classical automata, [34,4,56]. However, the problem becomes tractable when the learner is allowed to make membership and equivalence queries, as in the active model of learning introduced by Angluin, [4,5].…”
Section: Applications in Computational Learning Theory
confidence: 99%