1991
DOI: 10.1016/0304-3975(91)90026-x

Learnability with respect to fixed distributions

Cited by 104 publications (91 citation statements)
References 8 publications
“…The pair (C, H) is said to be passively learnable with respect to 𝒫 if there exists a function f ∈ F_{C,H} such that for every ε, δ > 0 there is a 0 < m(ε, δ) < ∞ such that for every probability measure P ∈ 𝒫 and every c ∈ C, if T ∈ X^m is chosen at random according to P^m, then the probability that error_{f,c,P}(T) < ε is greater than 1 − δ. If 𝒫 is the set of all probability distributions over some fixed σ-algebra of X (which we will denote by 𝒫*), then the above definition reduces to the version from Blumer et al. [6] of Valiant's [20] original definition (without restrictions on computability) for learnability for all distributions. If 𝒫 consists of a single distribution then the above definition reduces to that used by Benedek and Itai [5]. As often done in the literature, we will be considering the case H = C throughout, so that we will simply speak of learnability of C rather than learnability of (C, H).…”
Section: Definitions of Passive and Active Learnability (mentioning)
confidence: 99%
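
Spelled out in the notation of the passage above, the definition's quantifier structure is (a plain restatement, adding nothing beyond what the quote already says):

∃ f ∈ F_{C,H}  ∀ ε, δ > 0  ∃ m = m(ε, δ) < ∞  ∀ P ∈ 𝒫  ∀ c ∈ C:
    Pr_{T ∼ P^m} [ error_{f,c,P}(T) < ε ] > 1 − δ.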
“…That is, 𝒫 consists of a single distribution P that is known to the learner. Benedek and Itai [5] obtained conditions for passive learnability in this case in terms of a quantity known as metric entropy.…”
Section: Active Learning for a Fixed Distribution (mentioning)
confidence: 99%
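
Benedek and Itai's characterization suggests a concrete learner once the distribution P is known: fix a finite (ε/2)-cover of the class C under the pseudo-metric d_P(c, c′) = P(c Δ c′), then output the cover element with the smallest empirical error on the sample. The Python sketch below illustrates this idea; the uniform distribution on [0, 1), the threshold class, and all names in it are illustrative assumptions, not taken from the cited papers.

import random

def fixed_distribution_learner(cover, sample, labels):
    """Empirical-error minimization over a finite (eps/2)-cover of C
    under d_P.  A Hoeffding bound plus a union bound over the cover
    yields a finite sample size m(eps, delta)."""
    def empirical_error(h):
        return sum(h(x) != y for x, y in zip(sample, labels))
    return min(cover, key=empirical_error)

# Toy instance (illustrative): X = [0, 1) with P uniform, and C the
# initial segments [0, t).  Thresholds on an (eps/2)-grid form a finite
# (eps/2)-cover of C, since d_P([0, s), [0, t)) = |s - t| here.
eps = 0.1
grid = [k * eps / 2 for k in range(int(2 / eps) + 1)]
cover = [(lambda t: (lambda x: int(x < t)))(t) for t in grid]

target = lambda x: int(x < 0.37)                 # the unknown c in C
sample = [random.random() for _ in range(1000)]  # drawn i.i.d. from P
labels = [target(x) for x in sample]

hypothesis = fixed_distribution_learner(cover, sample, labels)

Such a finite cover exists for every ε exactly when C has finite metric entropy under d_P, which is the condition the quoted statement refers to.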
“…Some variations/extensions of the original model that have been studied and are relevant to the present work include learning with respect to a fixed distribution [3,6], and learning functions as opposed to sets (i.e., binary-valued functions) [6]. As the name suggests, learning with respect to a fixed distribution refers to the case in which the distribution from which the samples are drawn is fixed and known to the learner.…”
Section: PAC Learning with Generalized Samples (mentioning)
confidence: 99%