Topics in Grammatical Inference 2016
DOI: 10.1007/978-3-662-48395-4_6
Distributional Learning of Context-Free and Multiple Context-Free Grammars

Abstract: This paper presents an algorithm for strong learning of probabilistic multiple context-free grammars from a positive sample of strings generated by the grammar. The algorithm is shown to be a consistent estimator for a class of well-nested grammars, given by explicit structural conditions on the underlying grammar, and for grammars in this class it is guaranteed to converge to a grammar which is isomorphic to the original, not just one that generates the same set of strings.
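The probabilistic algorithm itself is not reproduced on this page. As a rough, illustrative sketch of the distributional primitive such learners are built on (the snippet is hypothetical and not taken from the paper), the following Python code tabulates the contexts in which each substring of a positive sample occurs, which is the raw material from which distributional learners posit nonterminals:

```python
from collections import defaultdict

def context_sets(sample):
    """Map each substring occurring in the sample to the set of
    (left, right) contexts in which it occurs. This substring/context
    relation is the basic primitive distributional learners work from."""
    contexts = defaultdict(set)
    for w in sample:
        for i in range(len(w) + 1):
            for j in range(i, len(w) + 1):
                contexts[w[i:j]].add((w[:i], w[j:]))
    return contexts

# Toy positive sample from the bracket-like language a^n b^n.
sample = ["ab", "aabb", "aaabbb"]
dists = context_sets(sample)

# Substrings sharing (enough of) their context sets are candidates for
# being generated by the same nonterminal, e.g. "ab" and "aabb":
print(dists["ab"] & dists["aabb"])   # {('', ''), ('a', 'b')}
```

The multiple context-free case generalises the same idea from single substrings and single-gap contexts to tuples of substrings and contexts with several gaps.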

Cited by 13 publications (9 citation statements); references 57 publications.
“…This condition is very close to a number of conditions that have been proposed in the literature both for topic modeling and for grammatical inference: We use here the terminology of Stratos et al (2016), but similar ideas occur in, for example, Adriaans's (1999) approach to learning CFGs and Denis et al's (2004) approach to learning regular languages. This is also very closely related to what is called the 1-Finite Kernel Property in distributional learning of CFGs (Clark and Yoshinaka, 2016).…”
Section: Structural Conditions on Grammars (mentioning)
confidence: 74%
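The anchor-style condition this statement alludes to can be pictured concretely. As a minimal sketch, assuming an invented toy grammar (none of the names below come from the cited papers, and this is only loosely related to the formal 1-Finite Kernel Property), the snippet checks whether every nonterminal has a bounded-depth yield that no other nonterminal can derive, i.e. a string that "anchors" it:

```python
from itertools import product

# Purely illustrative toy CFG: nonterminal -> list of right-hand sides.
GRAMMAR = {
    "S": [("NP", "VP")],
    "NP": [("det", "n")],
    "VP": [("v", "NP")],
}

def yields(sym, grammar, depth=4):
    """All terminal strings derivable from `sym` within a bounded depth."""
    if sym not in grammar:          # terminal symbol
        return {(sym,)}
    if depth == 0:
        return set()
    out = set()
    for rhs in grammar[sym]:
        parts = [yields(s, grammar, depth - 1) for s in rhs]
        for combo in product(*parts):
            out.add(tuple(t for part in combo for t in part))
    return out

# A string "anchors" a nonterminal if, within the bound, it is derivable
# from that nonterminal and from no other one.
all_yields = {nt: yields(nt, GRAMMAR) for nt in GRAMMAR}
for nt, ys in all_yields.items():
    anchors = [y for y in ys
               if all(y not in other for o, other in all_yields.items() if o != nt)]
    print(nt, "anchored by:", anchors[:1])
```

In this toy grammar every nonterminal happens to be anchored; grammars violating such a condition are exactly the ones these structural restrictions rule out.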
“…This can be naturally extended, mutatis mutandis, to sets of exemplars, and to exemplars with length greater than 1. The extension beyond CFGs to mildly context sensitive grammars such as MCFGs (Seki et al, 1991) seems to present some problems that do not occur in the nonprobabilistic case (Clark and Yoshinaka, 2016); although the same bounds on the bottom up parameters can be derived, identifying the set of anchors seems to be challenging.…”
Section: Discussion (mentioning)
confidence: 99%
“…Because of the works on learning context-free grammars [Clark and Eyraud, 2007, Clark et al., 2010], we conjecture that it would not be the case. Indeed, bi-directional RNNs use information from both the prefix and the suffix of a given element, which positions them within the spectrum of distributional learning [Clark and Yoshinaka, 2016]. The work on spectral learning of non-finite-state models [Bailly et al., 2013] would then be a good starting point for a distillation process for these networks.…”
Section: Discussion (mentioning)
confidence: 99%
“…However, it turns out that MIX is substitutable, so such an account would not be supported by Distributional Learning as a model of language acquisition. Proposition 13 (Clark and Yoshinaka, 2016).…”
Section: [)] (mentioning)
confidence: 99%
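Proposition 13, asserting that MIX (all strings over {a, b, c} containing equally many a's, b's and c's) is substitutable, can be checked empirically on small strings. The sketch below is ours, not taken from the cited work: it verifies, up to a length bound, the counting observation behind the result, namely that two strings sharing a single context must have the same letter-count signature, and that this signature alone determines which contexts accept them.

```python
from itertools import product

def in_mix(w):
    """MIX: strings over {a, b, c} with equally many a's, b's and c's."""
    return w.count("a") == w.count("b") == w.count("c")

def signature(w):
    """Letter-count differences; whether left + w + right lands in MIX
    depends on w only through this pair, so strings with equal signatures
    have identical context sets."""
    return (w.count("a") - w.count("b"), w.count("a") - w.count("c"))

# Brute-force check up to a small bound: whenever u and v share a context
# (left + u + right and left + v + right are both in MIX), their signatures
# agree; combined with the remark above, that is exactly substitutability.
N = 3
strings = ["".join(p) for n in range(N + 1) for p in product("abc", repeat=n)]
for left, right, u, v in product(strings, repeat=4):
    if in_mix(left + u + right) and in_mix(left + v + right):
        assert signature(u) == signature(v)
print("no counterexample among strings of length <=", N)
```

The bound only limits the exhaustive search; the counting argument itself is independent of string length.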