1997 IEEE International Conference on Acoustics, Speech, and Signal Processing
DOI: 10.1109/icassp.1997.596057

K-TLSS(S) language models for speech recognition

Abstract: The class of K-Testable Languages in the Strict Sense (K-TLSS) is a subclass of the regular languages. Previous work demonstrates that stochastic K-TLSS language models describe the same probability distribution as N-gram models, and that smoothing techniques can be applied to them efficiently (back-off-like methods). Given a set of k-TLSS models (k = 1, …, K) and a smoothing technique that specifically fits them, we propose here an integration into a unique self-contained model (the K-TLSS(S)) which embeds th…
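
To make the abstract's idea concrete, here is a minimal sketch of such a combined model, treating the stochastic k-TLSS automata (k = 1, …, K) as the equivalent k-gram models and backing off from longer to shorter histories. The class name KTLSSModel, the fixed back-off weight 0.4, and the floor probability for unseen symbols are illustrative assumptions, not the paper's exact scheme.

```python
from collections import defaultdict

class KTLSSModel:
    def __init__(self, K):
        self.K = K
        # counts[k][history][symbol]: how often `symbol` followed the length-(k-1) history
        self.counts = [defaultdict(lambda: defaultdict(int)) for _ in range(K + 1)]

    def train(self, sentences):
        for sent in sentences:
            symbols = ["<s>"] * (self.K - 1) + list(sent) + ["</s>"]
            for i in range(self.K - 1, len(symbols)):
                # Update every order k = 1..K at once, mirroring the set of k-TLSS models.
                for k in range(1, self.K + 1):
                    history = tuple(symbols[i - k + 1:i])
                    self.counts[k][history][symbols[i]] += 1

    def prob(self, history, symbol, k=None):
        # Back off from the order-K model toward shorter histories, as the
        # smoothed K-TLSS(S) combination does with its k = 1..K automata.
        if k is None:
            k = self.K
        h = tuple(history[-(k - 1):]) if k > 1 else ()
        seen = self.counts[k].get(h)
        if seen and symbol in seen:
            return seen[symbol] / sum(seen.values())
        if k == 1:
            return 1e-6  # floor for symbols unseen even at the unigram level (assumption)
        return 0.4 * self.prob(history, symbol, k - 1)  # fixed back-off weight (assumption)

model = KTLSSModel(K=3)
model.train([["the", "cat", "sat"], ["the", "dog", "sat"]])
print(model.prob(["the", "cat"], "sat"))  # 1.0: the trigram "the cat sat" was observed
```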

Cited by 8 publications (6 citation statements)
References 12 publications
“…To obtain the recognition rates, a test set of 600 utterances was used. The acoustic models of the system were fixed, and the language modelling part was implemented by means of K-TLSS(S) (K-Testable Language in the Strict Sense, smoothed), which are a kind of Variable N-grams [7]. 8262 sentences were used for training and 1147 for test.…”
Section: Results
confidence: 99%
“…The acoustic models were fixed, and the language modelling part was implemented by means of K-TLSS(S) (K-Testable Language in the Strict Sense, Smoothed), which are a kind of Variable N-grams [6]. 8262 sentences were used for training.…”
Section: Lexical Unit Based Recognition System
confidence: 99%
“…Let us remember that n-grams are essentially a family of regular languages. Indeed, several authors have proposed to use WFSTs to represent n-gram models [Riccardi et al 1995; Bordel et al 1997; Llorens Piñana 2000; Vidal et al 2005b], an issue to be discussed in Section 9.2.…”
Section: Weighted Finite Automata
confidence: 99%
“…Relating the use of this formalism to represent LMs, it is widely known that n-grams are a particular class of regular languages. Several authors have proposed to use WFSTs to represent n-gram models [Riccardi et al 1995; Bordel et al 1997; Llorens Piñana 2000; Vidal et al 2005b]. But note that other types of WFST-based LMs exist besides n-grams.…”
Section: Automaton Interface
confidence: 99%
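
As a toy illustration of the WFST view mentioned in the two statements above (not any particular author's construction), the sketch below compiles a bigram model into the arcs of a weighted acceptor: states are histories, seen bigrams become word arcs, and an epsilon-labeled back-off arc falls to a unigram state. The function name, the "<eps>" label, and the "<unigram>" state are assumptions for the sketch.

```python
import math

def bigram_to_wfsa(bigram_probs, backoff_weights):
    """bigram_probs: {(history_word, word): p}; backoff_weights: {history_word: alpha}.
    Returns arcs as (src_state, label, dst_state, -log weight) tuples."""
    arcs = []
    for (h, w), p in bigram_probs.items():
        arcs.append((h, w, w, -math.log(p)))  # seen bigram: consume w, move to state w
    for h, alpha in backoff_weights.items():
        # back-off arc: consume nothing, pay -log(alpha), fall to the unigram state
        arcs.append((h, "<eps>", "<unigram>", -math.log(alpha)))
    return arcs

arcs = bigram_to_wfsa({("the", "cat"): 0.5, ("the", "dog"): 0.5}, {"the": 0.4})
for arc in arcs:
    print(arc)
```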