2012
DOI: 10.1016/j.tcs.2012.07.017

Learning in the limit with lattice-structured hypothesis spaces

Cited by 28 publications (23 citation statements)
References: 26 publications

“…A positive answer would have important repercussions for linguistics as well as natural language processing. The subregular classes identified in computational phonology are learnable in the limit from positive text (Heinz et al., 2012), so a subregular theory of morphology would greatly simplify machine learning while also explaining how morphological dependencies can be acquired by the child from very little input. A subregular model of morphology would also be much more restricted with respect to what processes are predicted to arise in natural languages.…”
mentioning
confidence: 99%
“…In contrast to regular languages, tier-based strictly local languages are efficiently learnable in the limit from positive text (Heinz et al., 2012; Jardine and Heinz, 2016). Our result thus marks a first step towards provably correct machine learning algorithms for natural language morphology.…”
Section: Results
mentioning
confidence: 78%
“…The subregular hierarchy includes many other classes (see Fig. 1), but the previous three are noteworthy because they are conceptually simple and efficiently learnable in the limit from positive data (Heinz et al., 2012; Jardine and Heinz, 2016) while also furnishing sufficient power for a wide range of phonological phenomena (Heinz, 2015; Jardine, 2015).…”
Section: Subregular Patterns In Morphology
mentioning
confidence: 99%
“…A formal definition of a GIM which learns in this way is provided later in Definition 3. The SLk languages are identifiable in the limit from positive presentations [19] with a poly-time iterative and set-driven learner [20].…”
Section: B Grammatical Inference
mentioning
confidence: 99%
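
To illustrate the kind of learner referred to in the excerpt above, here is a minimal Python sketch of a set-driven string extension learner for SLk languages. The function names (k_factors, learn_sl, generates) and the toy sample are hypothetical illustrations, not code from the cited paper: the learner simply accumulates the k-factors (boundary-padded substrings of length k) observed in the positive presentation, and accepts exactly those strings whose k-factors are all licensed.

# Hypothetical sketch of a set-driven SLk learner (string extension learning).
# The grammar is simply the set of k-factors observed in the positive data.

def k_factors(word, k, boundary="#"):
    # Pad the word with boundary markers and collect all length-k substrings.
    padded = boundary * (k - 1) + word + boundary * (k - 1)
    return {padded[i:i + k] for i in range(len(padded) - k + 1)}

def learn_sl(positive_text, k):
    # Iterative, set-driven learner: union the k-factors seen so far.
    grammar = set()
    for word in positive_text:
        grammar |= k_factors(word, k)
    return grammar

def generates(grammar, word, k):
    # A word is accepted iff all of its k-factors are licensed by the grammar.
    return k_factors(word, k) <= grammar

# Toy positive presentation for an SL2 pattern (strict ab-alternation).
sample = ["ab", "abab", "ababab"]
grammar = learn_sl(sample, k=2)
print(sorted(grammar))                    # ['#a', 'ab', 'b#', 'ba']
print(generates(grammar, "abababab", 2))  # True
print(generates(grammar, "ba", 2))        # False: '#b' and 'a#' were never observed

Each new positive example can only add k-factors, which is what makes the learner iterative and convergent in the limit for a fixed k.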
“…The rules of the game are: Rule 1: at each round, either no door opens (thus ε ∈ Σ2), or one opens and another one closes; Rule 2: doors closed must be opposite to each other, that is, Σ2 = {ad, ae, af, bf, ce, ef, ε}, where ij denotes doors i and j being closed. Rule 1 makes L2(G) a strictly 2-local (SL2) language [20], [23]. The graph of this language acceptor can be represented with a Myhill graph.…”
Section: Playing For Real
mentioning
confidence: 99%
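
The excerpt above notes that an SL2 acceptor can be represented with a Myhill graph. As a rough Python sketch of that idea (the alphabet and permitted 2-factors below are a hypothetical toy grammar, not the door-game grammar from the cited paper), the graph's vertices are the symbols plus word boundaries, its edges are the permitted 2-factors, and a string is accepted iff every adjacent pair of symbols, boundaries included, is an edge.

# Hypothetical sketch of an SL2 acceptor represented as a Myhill graph:
# vertices are symbols plus word boundaries, edges are permitted 2-factors.

from collections import defaultdict

LEFT, RIGHT = "<", ">"   # word-boundary markers

def myhill_graph(permitted_bigrams):
    # Adjacency-list view of the permitted 2-factors.
    graph = defaultdict(set)
    for a, b in permitted_bigrams:
        graph[a].add(b)
    return graph

def accepts(graph, word):
    # A string is in the SL2 language iff every adjacent pair of symbols,
    # boundaries included, is an edge of the Myhill graph.
    padded = [LEFT] + list(word) + [RIGHT]
    return all(b in graph[a] for a, b in zip(padded, padded[1:]))

# Toy grammar (not the door game): strings alternate a/b, starting with a and ending with b.
bigrams = {(LEFT, "a"), ("a", "b"), ("b", "a"), ("b", RIGHT)}
graph = myhill_graph(bigrams)
print(accepts(graph, "abab"))   # True
print(accepts(graph, "aab"))    # False: 'aa' is not a permitted 2-factor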