A note on the iterative application of Bayes' rule (1965)
DOI: 10.1109/tit.1965.1053826

Cited by 64 publications (16 citation statements)
References 7 publications
“…and successively observed samples/blocks, which serve as the adaptation data for estimating the transformation parameters. The a posteriori density of these parameters satisfies the following recursive relation [11], [37]:…”
Section: A General Formulation (mentioning)
confidence: 99%
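
The recursive relation quoted above did not survive extraction. A standard form of the recursion, assuming $\lambda$ denotes the transformation parameters and $x_1, \dots, x_n$ the successively observed samples (notation assumed here, not taken from the source), is

$$
p(\lambda \mid x_1, \dots, x_n) \;=\; \frac{p(x_n \mid \lambda)\, p(\lambda \mid x_1, \dots, x_{n-1})}{\int p(x_n \mid \lambda')\, p(\lambda' \mid x_1, \dots, x_{n-1})\, d\lambda'},
$$

so each new sample's likelihood reweights the previous posterior, which then acts as the prior for the next update.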
“…Applying Bayes' theorem, we obtain a recursive expression (4) for the a posteriori pdf. Starting with the calculation of the first posterior pdf from the prior, repeated use of (4) produces the sequence of posterior densities, and so forth. This provides a basis for making recursive Bayesian inference of parameters [35]. Unfortunately, the implementation of this learning procedure for incremental CDHMM training raises serious computational difficulties because of the missing-data problem caused by the underlying hidden processes, i.e., the state mixture component label sequence and the state sequence of the Markov chain for an HMM.…”
Section: Incremental Bayes Learning (mentioning)
confidence: 99%
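
The recursion in (4) is easiest to see in a conjugate setting. Below is a minimal Python sketch using a Beta-Bernoulli pair rather than the CDHMM of the excerpt; the function name and toy data are assumptions for illustration only.

    # A minimal sketch of the recursion in (4) in a conjugate setting
    # (Beta-Bernoulli), where each posterior becomes the next prior. This is
    # a toy illustration, not the CDHMM case of the excerpt, for which no
    # such conjugate family exists.
    def recursive_beta_bernoulli(samples, alpha=1.0, beta=1.0):
        """Update a Beta(alpha, beta) posterior one 0/1 sample at a time."""
        for x in samples:
            # The posterior after observing x is again a Beta density, so the
            # full density recursion reduces to bookkeeping on (alpha, beta).
            alpha += x
            beta += 1 - x
            yield alpha, beta

    # Usage: stream observations and watch the posterior mean evolve.
    for a, b in recursive_beta_bernoulli([1, 0, 1, 1]):
        print(f"posterior mean = {a / (a + b):.3f}")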
“…Unfortunately, the implementation of this learning procedure for incremental CDHMM training raises serious computational difficulties because of the missing-data problem caused by the underlying hidden processes, i.e., the state mixture component label sequence and the state sequence of the Markov chain for an HMM. It is well known that no reproducing (natural conjugate) densities exist for the CDHMM [35], [8], [12]. To illustrate this problem more clearly, let us begin with the prior density and consider what happens after a training utterance (sample) is observed.…”
Section: Incremental Bayes Learning (mentioning)
confidence: 99%
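
A one-line way to see the difficulty, in standard HMM notation (assumed here, not taken from the source): the likelihood of an utterance $x$ marginalizes over the hidden state/mixture-label sequence $s$, so a prior $g(\lambda)$ updates to

$$
p(\lambda \mid x) \;\propto\; g(\lambda) \sum_{s} p(x, s \mid \lambda),
$$

a mixture with one term per hidden sequence. Repeated updates multiply the number of terms, so no fixed finite-parameter family is closed under the recursion.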
“…Unfortunately, the Bayesian approach is computationally infeasible for most practical choices of p(x | H_i, v_i). For the Bayesian scheme to be feasible, this unsupervised clustering problem has to be converted to a supervised one, since in this case, for most practical choices of p(x | H_j, v_j), a sufficient statistic vector exists for v (Spragins 1965). Based on this argument, Agrawala has proposed his “learning with a probabilistic teacher” scheme (Agrawala 1970).…”
Section: Learning With a Probabilistic Teacher (mentioning)
confidence: 99%
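
A rough Python sketch of the probabilistic-teacher idea on a toy two-class Gaussian problem follows; the function names, priors, and unit observation variance are illustrative assumptions, not Agrawala's exact scheme. For each unlabeled sample, a label is drawn from the current posterior class probabilities, and the supervised conjugate update is then applied as if that label were given.

    import math
    import random

    def gaussian_pdf(x, mean, var):
        return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def probabilistic_teacher(samples, prior_means=(-1.0, 1.0), prior_var=10.0):
        # (mean, variance) of the Gaussian posterior over each class mean;
        # class likelihoods are assumed Gaussian with known unit variance.
        post = [[m, prior_var] for m in prior_means]
        for x in samples:
            # Predictive weight of each class at x (equal class priors assumed;
            # predictive variance = posterior variance + observation variance).
            w = [gaussian_pdf(x, m, v + 1.0) for m, v in post]
            # The "probabilistic teacher": sample a label from these weights.
            label = 0 if random.random() < w[0] / (w[0] + w[1]) else 1
            # Supervised conjugate update of the sampled class's mean,
            # treating the sampled label as if it were the true one.
            m, v = post[label]
            post[label][1] = 1.0 / (1.0 / v + 1.0)          # posterior variance
            post[label][0] = post[label][1] * (m / v + x)   # posterior mean
        return post

    # Usage: data near -2 and +2; the two estimated centers should separate.
    random.seed(0)
    data = [random.gauss(-2, 1) for _ in range(50)] + [random.gauss(2, 1) for _ in range(50)]
    random.shuffle(data)
    print(probabilistic_teacher(data))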
“…However, for both algorithms, before training, the variances of all units are set to a fixed value σ_0, and thus only the centers are to be trained. Hence, the posterior is known to be a Gaussian density (Spragins 1965), that is, s_i = (C_i, σ_i)^T, as in (3.41) and (3.42), where the variance σ_i reflects the confidence in C_i as the estimate of c_i.…”
Section: PWTA-E (mentioning)
confidence: 99%
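
The Gaussian bookkeeping described in the excerpt, a center estimate paired with a variance that encodes confidence in it, can be sketched as follows; the names and the unit observation variance are illustrative assumptions.

    # Sequential Gaussian update of one unit's center; the posterior variance
    # shrinks with each observation, i.e., confidence in the center grows.
    def update_center(c, var, x, obs_var=1.0):
        new_var = 1.0 / (1.0 / var + 1.0 / obs_var)   # variance shrinks
        new_c = new_var * (c / var + x / obs_var)     # precision-weighted mean
        return new_c, new_var

    # Usage: after a few observations near 2.0 the variance is already small.
    c, var = 0.0, 10.0
    for x in [2.1, 1.9, 2.0]:
        c, var = update_center(c, var, x)
    print(c, var)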