1996
DOI: 10.1007/3-540-61863-5_53

Vacillatory and BC learning on noisy data

Abstract: The present work employs a model of noise introduced earlier by the third author. In this model noisy data nonetheless uniquely determines the true data: correct information occurs infinitely often while incorrect information occurs only finitely often. The present paper considers the effects of this form of noise on vacillatory and behaviorally correct learning of grammars, both from positive data alone and from informant (positive and negative data). For learning from informant, the noise, in effect, destroy…
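
For readers of the abstract, the noise model can be sketched roughly as follows; the symbols L, T, I and χ_L are our own notation and do not appear in the record above. A noisy text T for a language L lists every element of L infinitely often and every non-element only finitely often, and a noisy informant I for L repeats every correct pair (x, χ_L(x)) infinitely often while each incorrect pair appears only finitely often:

\[
\begin{aligned}
T \text{ is a noisy text for } L \ &\Longleftrightarrow\ \forall x \in L:\ |\{n : T(n)=x\}| = \infty \ \text{ and } \ \forall x \notin L:\ |\{n : T(n)=x\}| < \infty,\\
I \text{ is a noisy informant for } L \ &\Longleftrightarrow\ \forall x:\ |\{n : I(n)=(x,\chi_L(x))\}| = \infty \ \text{ and } \ |\{n : I(n)=(x,1-\chi_L(x))\}| < \infty.
\end{aligned}
\]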

Cited by 7 publications (5 citation statements)
References: 26 publications

“…Regarding the names of the learning criteria studied in the present paper, originally [31] "Ex" stood for "explanatory," "Fex" stood for "finitely explanatory," and "Bc" for "behaviorally correct." We mention, however, that there are some interesting effects on learning power for vacillatory function learning wrought by bounding suitably sensitive measures of the computational complexity of the learning functions themselves [24] and by the introduction of noisy input data [25].…”
mentioning
confidence: 99%
“…Case, Jain and Sharma (1997) present positive and negative results regarding synthesizers of language learners which tolerate noisy data, where noise is modeled as in Stephan (1995) and Case, Jain and Stephan (1996). Furthermore, the proofs of the positive results provide characterizations of corresponding noise-tolerantly learnable language classes.…”
mentioning
confidence: 99%
“…Since, from a machine M, one can effectively construct a machine M′ which NoisyInfBc^a-identifies NoisyInfEx^{2a}(M) (see [CJS96]), we immediately have (using Corollary 4 for the *-case) the following result about effective synthesis for NoisyInfBc^a-identification.…”
Section: When Effective Synthesis Is Possible
mentioning
confidence: 95%
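
With the superscripts restored, the quoted synthesis claim can be restated compactly in the usual identification-class notation (our paraphrase, not a quotation from [CJS96]):

\[
\forall M \ \exists M' \ (\text{effectively obtainable from } M):\quad
\mathrm{NoisyInfEx}^{2a}(M) \ \subseteq\ \mathrm{NoisyInfBc}^{a}(M').
\]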
“…As we will see, in the present paper, the proofs of most of our positive results which provide the existence of learner-synthesizers which synthesize noise-tolerant learners also yield pleasant characterizations which look like strict versions of the subset principle (1). We consider language learning from both texts (only positive data) and from informants (both positive and negative data), and we adopt Stephan's [Ste95, CJS96] noise model for the present study. Roughly, in this model correct information about an object occurs infinitely often while incorrect information occurs only finitely often.…”
Section: Introduction
mentioning
confidence: 99%