2008
DOI: 10.1016/j.tcs.2008.02.030

Learning indexed families of recursive languages from positive data: A survey

Abstract: In the past 40 years, research on inductive inference has developed along different lines, e.g., in the formalizations used, and in the classes of target concepts considered. One common root of many of these formalizations is Gold's model of identification in the limit. This model has been studied for learning recursive functions, recursively enumerable languages, and recursive languages, reflecting different aspects of machine learning, artificial intelligence, complexity theory, and recursion theory. One lin…
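As a rough illustration of Gold's notion of identification in the limit from positive data, the sketch below runs a simple "identification by enumeration" learner on a toy indexed family L_n = {0, 1, ..., n}. The family, the names language and learner, and the sample text are assumptions made purely for illustration; they are not taken from the surveyed paper.

```python
# Minimal sketch of Gold-style identification in the limit from positive data.
# Toy indexed family (an assumption for illustration): L_n = {0, 1, ..., n}.
# The learner conjectures, after each datum, the least index n whose language
# contains every example seen so far ("identification by enumeration").

from typing import Iterable, Iterator


def language(n: int, x: int) -> bool:
    """Uniform decision procedure for the family: x is in L_n iff 0 <= x <= n."""
    return 0 <= x <= n


def learner(text: Iterable[int]) -> Iterator[int]:
    """Emit a hypothesis (an index) after each positive example."""
    seen = set()
    for x in text:
        seen.add(x)
        # Least index consistent with all positive data observed so far.
        n = 0
        while not all(language(n, y) for y in seen):
            n += 1
        yield n


if __name__ == "__main__":
    # A text (positive presentation) for the target language L_5 = {0, ..., 5}.
    text = [2, 0, 5, 3, 5, 1, 4, 2]
    print(list(learner(text)))  # [2, 2, 5, 5, 5, 5, 5, 5]
    # The conjectures stabilize on the correct index 5 once the largest element
    # has appeared in the text: identification in the limit.
```

For this particular family the strategy succeeds; in general, as the survey discusses, whether such learners exist depends on the class and on the hypothesis space chosen.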

Cited by 57 publications (31 citation statements) | References 83 publications
“…The main reason for hypothesis space not to be critical in many cases is that one can automatically convert the indices from one automatic family to another for the languages which are common to both automatic families. This stands in a contrast to the corresponding results for indexed families of recursive languages [25,26]. A result in the present work which depends on the choice of the hypothesis space is Theorem 18.…”
Section: Learning With Feedback and Memory Limitations (contrasting)
confidence: 77%
“…An advantage of an automatic family over general indexed families [1,24,26] is that the first-order theory of automatic families, as well as of automatic structures in general, is decidable [15,16,22]. Here in the first-order theory, the predicates (relations) and functions (mappings) allowed are automatic.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, it is also shown in [JK07] that, surprisingly, such NCIt-learners cannot learn indexed classes class-preservingly (cf. [LZZ08]), that is, using a numbering of languages containing exactly the target class (and no other languages). Still class-preserving learnability is important, as any natural hypotheses space for an indexed class is class-preserving.…”
Section: Some General Effects Of Additional Information On NCIt-Learning (mentioning)
confidence: 99%
“…However, as it was established in [JK07], NCIt-learners sometimes cannot learn an indexed class class-preservingly (cf. [LZZ08])-that is, they cannot learn by using any descriptive numbering defining just the target class as the hypothesis space. It turns out that this result regarding NCIt-learners holds even if the NCIt-learners are allowed to make n-feedback membership queries (see Theorem 14).…”
Section: Introduction (mentioning)
confidence: 99%
“…However, as it was established in Jain and Kinber (2007), NCIt-learners sometimes cannot learn an indexed class class-preservingly (cf. Lange et al 2008)-that is, they cannot learn by using any descriptive numbering defining just the target class as the hypothesis space. It turns out that this result regarding NCIt-learners holds even if the NCIt-learners are allowed to make n-feedback membership queries (see Theorem 14).…”
(mentioning)
confidence: 99%