Probabilistic language learning under monotonicity constraints (1997)
DOI: 10.1016/s0304-3975(97)00017-0

Cited by 8 publications (4 citation statements)
References 17 publications
“…Probabilistic language learning [18] in the limit is the most interesting among those. Other recently studied models are probabilistic language learning with monotonicity restrictions [27] and probabilistic learning up to a small set of errors [22].…”
Section: Conclusion and Related Work
confidence: 99%
“…Furthermore, most concepts have a break-even point at some probability c < 1 in the sense that whenever such concepts are learnable with probability c, they are already learnable by a deterministic machine [1]. Meyer [12] showed that exact monotonic and exact conservative learning with any probability c < 1 is more powerful than deterministic learning; still, in the case c = 1, the probabilistic and deterministic variants are again the same. In [8], the notions of effective measure and category are used to discuss the relative sizes of inferable sets and their complements.…”
Section: Introduction
confidence: 99%
“…[14]). In these cases, the learning power of probabilistic machines increases even if the probability has to be close to 1.…”
Section: Introduction
confidence: 99%
“…Previous work in this field (cf. [14]) established that the probabilistic hierarchy in the case of proper probabilistic learning is dense, i.e., there is a dense set of rational numbers D ⊂ [0, 1] such that for each p ∈ D there is an indexed family which is conservatively learnable with probability p but not with probability q > p. In the case of class preserving conservative learning, we proved that for each p ≥ 1/2, there is an indexed family which is conservatively identifiable with some probability p′, 1 > p′ > p, but not with probability q > p′. Thus, conservative probabilistic learning is much stronger than conservative deterministic learning provided the machines are restricted to proper or class preserving hypothesis spaces.…”
Section: Introduction
confidence: 99%
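For readability, here is a hedged restatement in notation of the density claim quoted in the last excerpt above (proper conservative probabilistic learning). The symbol \mathcal{L}_p for the witnessing indexed family is introduced purely for illustration; it is not taken from the excerpt, whose own symbol was lost to extraction damage.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Hedged restatement of the quoted density claim; the family symbol
% \mathcal{L}_p is illustrative and does not appear in the source excerpt.
\[
  \exists\, D \subseteq \mathbb{Q} \cap [0,1] \ \text{dense}\quad
  \forall p \in D \ \exists\, \mathcal{L}_p \ \text{(an indexed family)}:
\]
\[
  \mathcal{L}_p \ \text{is conservatively learnable with probability } p,
  \ \text{but not with any probability } q > p .
\]
\end{document}
```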