1994
DOI: 10.1007/bf00993468

Toward efficient agnostic learning

Abstract: In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct …
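As a concrete illustration of the setting the abstract describes, here is a minimal sketch of agnostic learning by empirical risk minimization over a toy hypothesis class of one-dimensional thresholds. The class, the data, and the function names are illustrative choices, not anything from the paper: the point is only that the learner makes no assumption that the labels are generated by any threshold, and simply returns the hypothesis in the class with the fewest disagreements on the sample.

```python
# Minimal sketch of agnostic learning via empirical risk minimization (ERM).
# Hypothesis class: 1-D thresholds h_t(x) = 1 if x >= t else 0.
# No assumption that the labels come from any threshold: we return the
# hypothesis in the class with the fewest disagreements on the sample.

def erm_threshold(sample):
    """sample: list of (x, y) pairs with y in {0, 1}; labels may be arbitrary."""
    xs = sorted(x for x, _ in sample)
    # Candidate thresholds: one below all points, plus each data point.
    candidates = [xs[0] - 1.0] + xs
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum(1 for x, y in sample if (x >= t) != (y == 1))
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err / len(sample)

# Labels here are deliberately inconsistent with every single threshold.
data = [(0.1, 0), (0.4, 1), (0.5, 0), (0.9, 1), (1.3, 1)]
t, emp_err = erm_threshold(data)
print(f"best threshold {t}, empirical error {emp_err:.2f}")
```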

Cited by 142 publications (66 citation statements). References 23 publications.
“…When an algorithm is run on a sequence of feature vectors and rewards satisfying these assumptions, we call it an η-admissible run of the algorithm. Following [23], we refer to this as the agnostic case.…”
Section: The Agnostic Case
confidence: 99%
“…The agnostic setting: It is well-known that a hypothesis class H is PAC-learnable in the (so-called) agnostic setting (with no a priori assumptions about the data) iff the corresponding Minimum Disagreement Problem for H can be solved in polynomial time [15]. The completely analogous remark holds for "Minimum One-sided Disagreement" and "Agnostic PAC-learning with One-sided Empirical Error".…”
Section: Some Closing Remarks
confidence: 74%
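The equivalence quoted above reduces agnostic PAC learning to empirical minimization: draw a sample large enough for uniform convergence, then hand it to the Minimum Disagreement solver and return its output. The following sketch assumes such a solver is available as a black box; the sample-size constant and the demo hypothesis class (1-D thresholds, VC dimension 1) are illustrative choices, not details from the cited works.

```python
import math
import random

def agnostic_pac_learn(draw_example, min_disagreement, vc_dim, eps, delta):
    # Uniform convergence: a sample of size O((d + log(1/delta)) / eps^2)
    # makes every hypothesis's empirical error lie within eps/2 of its true
    # error, so the empirical minimizer returned by the solver is
    # eps-optimal within the class. The constant 64 is illustrative,
    # not a tight bound.
    m = math.ceil(64 * (vc_dim + math.log(1 / delta)) / eps ** 2)
    sample = [draw_example() for _ in range(m)]
    return min_disagreement(sample)

# Demo with the 1-D threshold class (VC dimension 1); labels are drawn
# independently of x, so no target-function assumption holds.
random.seed(0)
draw = lambda: (random.random(), random.randint(0, 1))

def threshold_solver(sample):
    # Minimum Disagreement for thresholds: try a cut at each sample point.
    return min((x for x, _ in sample),
               key=lambda t: sum((x >= t) != (y == 1) for x, y in sample))

print(agnostic_pac_learn(draw, threshold_solver, vc_dim=1, eps=0.3, delta=0.1))
```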
“…An O(n^6 k)-time algorithm for finding one polygon in the class of convex s-gons that minimizes the classification error on a sample of labeled points is given [31]. This result implies that convex polygons are learnable in the PAC model with random classification noise [32] and in the "agnostic" PAC model [33].…”
Section: Learning in Constant Dimension
confidence: 98%