An index for an r.e. class of languages is, by definition, a procedure which generates a sequence of grammars defining the class. An index for an indexed family of languages is, by definition, a procedure which generates a sequence of decision procedures defining the family. Studied is the metaproblem of synthesizing, from indices for r.e. classes and for indexed families of languages, various kinds of language-learners for the corresponding classes or families indexed. Many positive results, as well as some negative results, are presented regarding the existence of such synthesizers. The negative results essentially provide lower bounds for the positive results. The proofs of some of the positive results yield, as pleasant corollaries, subset-principle or tell-tale style characterizations for the learnability of the corresponding classes or families indexed. For example, the indexed families of recursive languages that can be behaviorally correctly identified from positive data are, surprisingly, characterized by Angluin's (1980b) Condition 2 (the subset principle for circumventing overgeneralization).

Footnote 2: The problem is not that correct grammars for finite classes of languages can't be learned in the limit; they can (Osherson, Stob and Weinstein (1986a)), and by an obvious enumeration technique. The problem is how to pass algorithmically from a list of grammars to a machine which so learns the corresponding languages. A study of the proof of the result shows that, intuitively, the difficulty, given a pair of grammars g1, g2 for a language class L = {L1, L2}, in synthesizing a TxtEx-learner successful on L lies in deciding from g1, g2 whether or not L1 = L2. This equivalence problem is well-known to be algorithmically unsolvable (Rogers (1967)).

Footnote 3: Bc is short for behaviorally correct.
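The "obvious enumeration technique" mentioned above for learning finite classes in the limit can be sketched as follows. This is a toy illustration, not the paper's construction: here the languages are given as explicit finite Python sets, so membership and inclusion are trivially decidable, whereas the paper's difficulty arises precisely because the learner receives only grammars for r.e. sets, for which equivalence is undecidable. The learner conjectures, after each datum, the index of a minimal (with respect to inclusion) language consistent with the positive data seen so far, which avoids overgeneralization and converges on any text for a language in the class.

```python
def enumeration_learner(languages, text):
    """Identification in the limit by enumeration (toy sketch).

    `languages` is a finite list of languages given as explicit sets;
    `text` is a finite prefix of a positive-data presentation.
    Returns the sequence of conjectured indices, one per datum.
    """
    seen = set()
    conjectures = []
    for datum in text:
        seen.add(datum)
        # Indices of languages consistent with the data seen so far.
        consistent = [i for i, L in enumerate(languages) if seen <= L]
        # Keep only minimal consistent languages (subset principle):
        # never conjecture a language that properly contains another
        # consistent candidate.
        minimal = [i for i in consistent
                   if not any(languages[j] < languages[i] for j in consistent)]
        conjectures.append(minimal[0] if minimal else None)
    return conjectures

# Class {L0, L1, L2}; the text below presents the middle language {1, 2, 3}.
langs = [{1, 2}, {1, 2, 3}, {4}]
result = enumeration_learner(langs, [1, 2, 3, 3, 2])
print(result)  # the learner converges to index 1 once the datum 3 appears
```

On this run the learner first conjectures the smaller language {1, 2} and switches to index 1 permanently after seeing 3, illustrating why, with positive data only, a proper sublanguage is eventually ruled out.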
Gold-style language learning is a formal theory of learning from examples by algorithmic devices called learning machines. Originally motivated by child language learning, it features the algorithmic synthesis (in the limit) of grammars for formal languages from information about those languages. In traditional Gold-style language learning, learning machines are not provided with negative information, i.e., information about the complements of the input languages. We investigate two approaches to providing small amounts of negative information and demonstrate in each case a strong resulting increase in learning power. Finally, we show that small packets of negative information also lead to increased speed of learning. This result agrees with a psycholinguistic hypothesis of McNeill correlating the availability of parental expansions with the speed of child language development.