2007
DOI: 10.1007/978-3-540-73208-2_23

Bisimulation Minimisation for Weighted Tree Automata

Abstract: We generalise existing forward and backward bisimulation minimisation algorithms for tree automata to weighted tree automata. The obtained algorithms work for all semirings and retain the time complexity of their unweighted variants for all additively cancellative semirings. On all other semirings the time complexity is slightly higher (linear instead of logarithmic in the number of states). We discuss implementations of these algorithms on a typical task in natural language processing.
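To illustrate the kind of algorithm the abstract describes, here is a minimal, hypothetical sketch of bisimulation-style partition refinement, simplified to a weighted *word* automaton over the integers rather than a weighted tree automaton, and without the counting tricks the paper uses to reach the stated time complexity. All names (`bisim_partition`, the `delta`/`final` encoding) are illustrative assumptions, not the paper's API. States are merged when they carry the same final weight and, for every letter and every current block, the same total transition weight into that block.

```python
from collections import defaultdict

def bisim_partition(states, delta, final):
    """Coarsest bisimulation-style partition of `states` (illustrative sketch).

    delta: dict mapping (source, letter, target) -> weight
    final: dict mapping state -> final weight (absent = weight 0)
    """
    # Initial partition: group states by their final weight.
    by_final = defaultdict(set)
    for q in states:
        by_final[final.get(q, 0)].add(q)
    partition = list(by_final.values())

    changed = True
    while changed:
        changed = False
        # Map each state to the index of its current block.
        block_of = {q: i for i, block in enumerate(partition) for q in block}
        refined = []
        for block in partition:
            # Signature of q: total weight into each (letter, target-block) pair.
            groups = defaultdict(set)
            for q in block:
                sig = defaultdict(int)
                for (p, a, r), w in delta.items():
                    if p == q:
                        sig[(a, block_of[r])] += w
                groups[frozenset(sig.items())].add(q)
            refined.extend(groups.values())
            if len(groups) > 1:
                changed = True  # a block was split; refine again
        partition = refined
    return partition
```

With integer weights the signature comparisons are exact; over a cancellative semiring the same grouping idea applies. The naive scan over `delta` makes each pass cost O(|states| · |delta|), whereas the paper's algorithms organise the refinement to match the complexity of the unweighted variants.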

Year Published: 2009–2024


Cited by 16 publications (41 citation statements)
References 18 publications (26 reference statements)
“…Reduction of automata from some of such classes has already been considered in the literature (e.g., in [8], the author proposes a bisimulation-based minimisation of weighted word automata, and a use of bisimulations for reducing weighted tree automata is considered in [10]). …”
Section: Discussion
confidence: 99%
“…We prove this claim by the following counterexample. 10 and F = Q. Let us choose the relation R = id ∪ {(q, r), (r, t), (q, t)}, which is apparently contained in LP , as the inducing preorder.…”
Section: Impossibility Of Relaxing The Need Of Downward Simulations
confidence: 99%
“…The bijection will be established by evaluating the access trees in N. Formally, we let h: Q → P be such that h(q) = μ(t_q) for every q ∈ Q, which immediately proves that h(q) ∈ Ker(N) for all q ∈ Ker(M) because ht(t_q) > |Q × P|. Moreover, for each q ∈ Ker(M) the facts M ∼ N and … Finally, for every p ∈ Ker(N), select u_p ∈ L(N)_p such that ht(u_p) > |Q × P|. Clearly, δ(u_p) ∈ Ker(M) and by Lemma 3 we obtain…”
Section: Theorem 6, If M ∼ N And Both M And N Are Hyper-minimal Then
confidence: 98%
“…Classical examples in the area of natural language processing include the estimation of probabilistic tree automata [1][2][3] for parsing [4], probabilistic finite-state automata [5] for speech recognition [6], and probabilistic finite-state transducers [7] for machine translation [8]. Since those models tend to become huge, various efficient minimization techniques (such as the merge-steps in the Berkeley parser training [4] or bisimulation minimization [9,10]) are used. Unfortunately, already the computation of an equivalent minimal nondeterministic finite-state automaton (NFA) [11] given an input NFA is PSPACE-complete [12] and thus inefficient; this remains true even if the input NFA is deterministic.…”
Section: Introduction
confidence: 99%
“…8 runs in time O(M²N³) for a directed graph with N vertices and M edges. If the network has labels on the edges, complexity is increased by a factor as large as the number of different labels. If the network has integers as weights (and we assume that comparisons can be done in constant time) the time complexity remains the same (Högberg et al 2007). …”
Section: Algorithm
confidence: 99%