Proceedings of the Fifth Annual Workshop on Computational Learning Theory 1992
DOI: 10.1145/130385.130427
Types of monotonic language learning and their characterization

Abstract: The present paper deals with strong-monotonic […]


Cited by 69 publications (31 citation statements)
References 13 publications
“…This contrasts with the situation for learning from texts, where every indexed family which can be strongly monotonically identified from texts can be strongly monotonically identified by an iterative learner from texts (using some appropriate hypothesis space) [LZ92].…”
Section: Results
confidence: 93%
“…We show that there exists an indexed family which can be strongly monotonically identified using a class preserving hypothesis space but cannot be strongly monotonically identified by an iterative learner, even if the iterative learner is free in the choice of the hypothesis space used. This contrasts with the situation for learning from texts, where every indexed family which can be strongly monotonically identified can also be strongly monotonically identified by an iterative learner using some appropriate hypothesis space [LZ92].…”
Section: Introduction
confidence: 88%
“…(This latter subset principle, for preventing overgeneralization, is further discussed, for example, in [8,12,13,24,34,3,35,23]. Mukouchi [27] and Lange and Zeugmann [25] present a subset principle for one-shot learning.) Even at the TxtBc levels, noise is problematic.…”
Section: Disadvantages Of Having Noise In The Input
confidence: 99%
“…In Wiehagen (1976) CONS ⊂ IT ⊂ LIM has been proved. Recent papers give new evidence for the power of iterative learning (cf., e.g., Porat and Feldman (1988), Lange and Wiehagen (1991), Lange and Zeugmann (1992)).…”
Section: The General Inconsistency Phenomenon
confidence: 99%