2012
DOI: 10.1016/j.csl.2012.04.001
The latent words language model

Cited by 28 publications (32 citation statements)
References 19 publications
“…To further advance towards domain robust ASR, this paper focuses on the latent words LMs (LWLMs) recently proposed in the machine learning area [12]. LWLMs are generative models that have latent variables called latent words.…”
Section: Introduction
confidence: 99%
“…$P(h_k \mid l_k, \theta)$ represents the transition probability, which can be expressed by an n-gram model for latent words, and $P(w_k \mid h_k, \theta)$ represents the emission probability that models the dependency between the observed word and the latent word. More details are shown in previous works (Deschacht et al., 2012; Masumura et al., 2013a; Masumura et al., 2013b).…”
Section: Latent Words Language Models
confidence: 99%
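The generative story described in the excerpt above can be illustrated with a short sketch. This is a toy Python illustration under stated assumptions, not the authors' implementation: the vocabulary, the uniform transition distribution, and the hand-set emission probabilities are placeholders standing in for the learned n-gram transition model and emission model of an LWLM.

```python
import random

# Toy sketch of the LWLM generative process: at each position k, sample a
# latent word h_k from the transition model given the latent history l_k,
# then sample the observed word w_k from the emission model P(w_k | h_k).
# All distributions below are illustrative assumptions, not learned values.

VOCAB = ["the", "a", "cat", "dog", "sat", "ran"]

def transition(latent_history):
    """P(h_k | l_k): next latent word given the previous latent words.
    A uniform distribution stands in for a learned n-gram over latent words."""
    return {h: 1.0 / len(VOCAB) for h in VOCAB}

def emission(latent_word):
    """P(w_k | h_k): observed word given the latent word.  The latent word is
    emitted as itself most of the time (illustrative numbers only)."""
    dist = {w: 0.05 / (len(VOCAB) - 1) for w in VOCAB if w != latent_word}
    dist[latent_word] = 0.95
    return dist

def sample(dist):
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs, k=1)[0]

def generate(length=5, order=3):
    """Generate a sequence of latent and observed words."""
    latent, observed = [], []
    for _ in range(length):
        l_k = tuple(latent[-(order - 1):])   # latent n-gram context
        h_k = sample(transition(l_k))        # latent word
        w_k = sample(emission(h_k))          # observed word
        latent.append(h_k)
        observed.append(w_k)
    return latent, observed

if __name__ == "__main__":
    h, w = generate()
    print("latent:  ", " ".join(h))
    print("observed:", " ".join(w))
```

Note that, as the next excerpt states, each latent variable ranges over the full vocabulary, so the latent sequence is itself a word sequence rather than an abstract tag sequence.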
“…LWLMs are generative models with single latent word space (Deschacht et al, 2012). The latent word is represented as a specific word that is selected from the entire vocabulary.…”
Section: Latent Words Language Models
confidence: 99%
“…For instance, neural networks or Bayesian network models are trained with a large collection of texts to learn distributional semantic models or language models that capture the contexts of a word. As a result, a word can be represented as a vector (i.e., a word embedding) the dimensions of which embed the contextual knowledge (e.g., [43]) or as a predictive language model (e.g., [19]). These works build further on older research of word representations that were learned from large text corpora that predict a word in the context of surrounding words or that learn topical concepts by considering co-occurrences in a full discourse.…”
Section: Current Proposed Solutions: Representation Learning and Deep
confidence: 99%
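As a side note to the excerpt above, the word-as-vector idea can be sketched minimally. The three-dimensional vectors and the word list below are made up for illustration; real embeddings are learned from large corpora.

```python
import numpy as np

# Made-up embeddings: each word maps to a dense vector whose dimensions are
# assumed to encode contextual usage (values here are purely illustrative).
embeddings = {
    "cat": np.array([0.9, 0.1, 0.3]),
    "dog": np.array([0.8, 0.2, 0.4]),
    "car": np.array([0.1, 0.9, 0.7]),
}

def cosine(u, v):
    """Cosine similarity: words used in similar contexts get similar vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["cat"], embeddings["dog"]))  # high: related words
print(cosine(embeddings["cat"], embeddings["car"]))  # lower: unrelated words
```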