1990
DOI: 10.1016/0885-2308(90)90022-x
The estimation of stochastic context-free grammars using the Inside-Outside algorithm

Cited by 388 publications (373 citation statements). References 13 publications.
“…Recently, we have proposed a 1-D grammar learner [3], and have shown that the 1-D grammar learner acquires knowledge more effectively and runs faster than the inside-outside algorithm [13]. Hence, we further extend the one-dimensional grammar learner to acquire a 2-D pCFG from two-dimensional training records.…”
Section: Learning Two-dimensional Display Layout Using Probabilistic…
Mentioning; confidence: 99%
“…The derivation is conceptually based on relative frequency counting for discrete data, which is common practice for estimating PCFGs (Lari and Young 1990). In this section, we derive a parameter estimation algorithm for the composite simplified lexical and semantic enhanced structural language model from the general EM algorithm (Dempster et al. 1977).…”
Section: Training Algorithm for Simplified Lexical and Semantic Enhanced…
Mentioning; confidence: 99%
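The inside probabilities that underpin this EM-style estimation of PCFGs (the Inside-Outside algorithm of Lari and Young, 1990) can be illustrated with a minimal sketch. The toy grammar, rule probabilities, and sentence below are illustrative assumptions, not taken from the paper or the citing work:

```python
# Minimal sketch of the inside algorithm for a toy PCFG in Chomsky
# normal form. Grammar, probabilities, and sentence are hypothetical.
from collections import defaultdict

# Binary rules A -> B C and lexical rules A -> word, with probabilities.
binary = {("S", ("NP", "VP")): 1.0,
          ("NP", ("Det", "N")): 1.0,
          ("VP", ("V", "NP")): 1.0}
lexical = {("Det", "the"): 1.0,
           ("N", "dog"): 0.5, ("N", "cat"): 0.5,
           ("V", "saw"): 1.0}

def inside_probability(words, start="S"):
    """Return P(start derives words) via the inside (beta) recursion."""
    n = len(words)
    # beta[i][j][A] = P(A derives words[i..j])
    beta = [[defaultdict(float) for _ in range(n)] for _ in range(n)]
    # Base case: lexical rules cover single words.
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                beta[i][i][A] += p
    # Recursive case: sum over binary rules and split points.
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # split point between B and C
                for (A, (B, C)), p in binary.items():
                    beta[i][j][A] += p * beta[i][k][B] * beta[k + 1][j][C]
    return beta[0][n - 1][start]

print(inside_probability(["the", "dog", "saw", "the", "cat"]))  # → 0.25
```

In the full Inside-Outside procedure, these inside probabilities are combined with analogous outside probabilities to obtain expected rule counts, whose normalization gives the EM re-estimate of each rule probability.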
“…These models, which are subclasses of Bayesian networks, significantly reduce the high complexity of Bayesian networks by restricting the target to labeled trees or labeled ordered trees [19]. The extension of HTMM to PSTMM for trees is analogous to the extension of the hidden Markov model (HMM) [3,17] to probabilistic context-free grammars (PCFGs) [12,14] for sequences (strings). Consequently, the time and space complexity of the learning algorithm roughly increases by a factor of the number of states in PSTMM.…”
Section: Introduction
Mentioning; confidence: 99%