Proceedings of the Fifth Annual Workshop on Computational Learning Theory 1992
DOI: 10.1145/130385.130424

Toward efficient agnostic learning

Abstract: In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct …
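The model the abstract describes can be illustrated by empirical risk minimization over a fixed hypothesis class: with no assumption that any hypothesis fits the target, the learner simply returns the hypothesis with the lowest observed error. The sketch below is illustrative only; the threshold class and data are assumptions, not from the paper.

```python
# Minimal sketch of agnostic learning via empirical risk minimization (ERM).
# No assumption is made that any h in H matches the labeling process; we just
# return the hypothesis with the smallest empirical error on the sample.

def empirical_error(h, sample):
    """Fraction of labeled examples (x, y) that h misclassifies."""
    return sum(1 for x, y in sample if h(x) != y) / len(sample)

def erm(hypotheses, sample):
    """Return the hypothesis in the class with minimum empirical error."""
    return min(hypotheses, key=lambda h: empirical_error(h, sample))

# Usage: a toy class of threshold rules over the integers 0..10.
H = [lambda x, t=t: x >= t for t in range(11)]
data = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 0)]  # note the noisy label at x=9
best = erm(H, data)
print(empirical_error(best, data))  # best achievable error on this sample is 1/6
```

Because the label at x=9 contradicts every threshold rule, no hypothesis achieves zero error; the agnostic goal is only to compete with the best in the class.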

Cited by 226 publications (191 citation statements)
References 19 publications (14 reference statements)
“…2. This is in contrast to the case of probabilistic models of learning, where efficient algorithms with good learning performance have been discovered for this function class (Kearns, Schapire & Sellie, 1992).…”
Section: Acknowledgment
confidence: 88%
“…This is a different task than agnostic learning [11], which requires the learner to find a concept that best approximates the probabilistic (noisy) concept observed. An extra difficulty of our task arises from the fact that the noise process may generate two similar probabilistic concepts from two dissimilar concepts.…”
Section: Class Noise Models
confidence: 99%
“…A well-established approach in such cases (Vovk, 1990; Littlestone & Warmuth, 1994; Littlestone, 1989; Feder, Merhav, & Gutman, 1992; Merhav & Feder, 1993; Cesa-Bianchi et al., 1997; Cesa-Bianchi et al., 1996) is to assume nothing about the (x_t, y_t) pairs, and instead, for a given F, to give bounds on the number of mistakes made by a given learning algorithm in terms of the minimum over f ∈ F of the number η of trials t for which f(x_t) ≠ y_t. Learning models like this are often referred to as agnostic learning models (Kearns, Schapire, & Sellie, 1994).…”
Section: Agnostic Learning
confidence: 99%
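The benchmark quantity η in the passage above can be sketched directly: for a comparison class F and an arbitrary trial sequence, it is the minimum number of trials on which any single predictor in F disagrees with the observed labels. The class and sequence below are illustrative assumptions, not from the cited works.

```python
# Hedged sketch of the agnostic mistake-bound benchmark: eta is the minimum,
# over f in F, of the number of trials t on which f(x_t) != y_t.

def eta(F, trials):
    """min over f in F of |{t : f(x_t) != y_t}|."""
    return min(sum(1 for x, y in trials if f(x) != y) for f in F)

F = [lambda x: x % 2, lambda x: 1 - x % 2]         # parity rule and its complement
trials = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 1)]  # last label breaks the parity rule
print(eta(F, trials))  # the parity predictor errs only on the final trial, so eta = 1
```

A mistake bound of the kind quoted would then guarantee the algorithm's total mistakes are not much larger than this η, with no assumption that any f ∈ F labels the sequence perfectly.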