Interspeech 2013
DOI: 10.21437/interspeech.2013-449
Failure transitions for joint n-gram models and G2P conversion

Cited by 12 publications (2 citation statements) | References 6 publications
“…Traditionally G2P is done with joint-sequence n-gram models [1,2], [13,14]. While these are still considered to be state of the art models best designed for single lexicons, they require explicit alignment information to be provided during training.…”
Section: Related Work
confidence: 99%
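The joint-sequence approach described in the excerpt above treats each aligned (grapheme, phoneme) pair as a single joint token and trains an ordinary n-gram model over those tokens. A minimal sketch, assuming a pre-aligned lexicon (the explicit alignment the statement notes these models require during training); the function names and token format here are illustrative, not from the cited paper:

```python
from collections import defaultdict

def joint_tokens(graphemes, phonemes):
    """Pair aligned grapheme/phoneme sequences into joint tokens.

    Assumes a 1-to-1 alignment is already given, as the excerpt
    says joint-sequence models require.
    """
    return [f"{g}:{p}" for g, p in zip(graphemes, phonemes)]

def bigram_counts(lexicon):
    """Count joint-token bigrams over an aligned lexicon; these
    counts would feed a standard n-gram estimator."""
    counts = defaultdict(int)
    for graphemes, phonemes in lexicon:
        toks = ["<s>"] + joint_tokens(graphemes, phonemes) + ["</s>"]
        for a, b in zip(toks, toks[1:]):
            counts[(a, b)] += 1
    return counts

# Toy aligned entry: "cab" -> /k ae b/
lexicon = [(["c", "a", "b"], ["k", "ae", "b"])]
print(bigram_counts(lexicon)[("c:k", "a:ae")])  # 1
```

In practice the alignment itself is learned (e.g. with EM), and the n-gram model is smoothed; this sketch only shows the joint-token construction.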
“…We allow failure transitions [2,38] in the target WFA, which are taken only when no immediate match is possible at a given state, for compactness. For example, in the WFA representation of a backoff k-gram model, failure transitions can compactly implement the backoff [35,18,5,43,31]. The inclusion of failure transitions will complicate our analysis and algorithms but is highly desirable in applications such as keyboard decoding.…”
Section: Introduction
confidence: 99%
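The failure-transition mechanism quoted above can be illustrated with a toy backoff bigram automaton: at each state, a matching arc is taken if one exists; otherwise the failure (phi) transition is followed to a lower-order state, multiplying in the backoff weight. This is a hypothetical sketch, not the cited paper's implementation; the state names and weights are invented:

```python
def make_backoff_wfa():
    # Hand-built toy automaton for a backoff bigram model.
    # Each state maps a symbol to (arc_weight, next_state); "phi"
    # is the failure transition, taken only when no symbol matches.
    unigram = {"a": (0.5, "a"), "b": (0.3, "b")}
    return {
        "<eps>": {"arcs": unigram, "phi": None, "phi_w": 1.0},
        "a": {"arcs": {"b": (0.9, "b")}, "phi": "<eps>", "phi_w": 0.2},
        "b": {"arcs": {"a": (0.8, "a")}, "phi": "<eps>", "phi_w": 0.4},
    }

def score(states, symbols):
    """Multiply arc weights along the path for `symbols`, following
    failure transitions (and their backoff weights) whenever the
    current state has no arc for the next symbol."""
    state, w = "<eps>", 1.0
    for sym in symbols:
        while sym not in states[state]["arcs"]:
            if states[state]["phi"] is None:
                raise KeyError(f"unknown symbol {sym!r}")
            # No immediate match: take the failure transition.
            w *= states[state]["phi_w"]
            state = states[state]["phi"]
        arc_w, state = states[state]["arcs"][sym]
        w *= arc_w
    return w

wfa = make_backoff_wfa()
print(score(wfa, ["a", "b"]))  # bigram arc used directly: 0.5 * 0.9
print(score(wfa, ["a", "a"]))  # backs off: 0.5 * 0.2 * 0.5
```

The compactness the excerpt mentions comes from the `"a"` state not needing an explicit arc for every symbol: unmatched symbols fall through one shared failure transition to the unigram state.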