2004
DOI: 10.1109/tit.2004.826687
The Kullback–Leibler Divergence Rate Between Markov Sources

Abstract: In this work, we provide a computable expression for the Kullback–Leibler divergence rate $\lim_{n\to\infty} \frac{1}{n} D(p^{(n)} \| q^{(n)})$ between two time-invariant finite-alphabet Markov sources of arbitrary order and arbitrary initial distributions described by the probability distributions $p^{(n)}$ and $q^{(n)}$, respectively. We illustrate it numerically and examine its rate of convergence. The main tools used to obtain the Kullback–Leibler divergence rate and its rate of convergence are the theory of nonnegative matrices and Perron–Frobenius theory. Similarly, we provide a formula for the Shannon entropy rate $\lim_{n\to\infty} \frac{1}{n} H(p^{(n)})$ of Markov sources and examine its rate of convergence.
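As a concrete illustration of the rate in the abstract, here is a minimal Python sketch (mine, not the paper's) for the special case of irreducible first-order chains, where the rate has the closed form $\sum_i \pi_P(i) \sum_j P(i,j) \log \frac{P(i,j)}{Q(i,j)}$ with $\pi_P$ the stationary distribution of $P$. The function names are my own.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of an irreducible row-stochastic matrix P,
    taken as the normalized left Perron eigenvector (pi = pi @ P)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def kl_divergence_rate(P, Q):
    """lim (1/n) D(p^(n) || q^(n)) for two irreducible first-order chains
    with transition matrices P and Q, in nats per symbol. Assumes
    Q[i, j] > 0 wherever P[i, j] > 0; otherwise the rate is infinite."""
    pi = stationary_distribution(P)
    mask = P > 0
    # Set the ratio to 1 (log = 0) where P is 0, so those terms contribute nothing.
    ratio = np.where(mask, P / np.where(mask, Q, 1.0), 1.0)
    return float(np.sum(pi[:, None] * P * np.log(ratio)))

P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = np.array([[0.7, 0.3], [0.4, 0.6]])
print(kl_divergence_rate(P, Q))  # ~0.108 nats per symbol
```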


Cited by 110 publications (73 citation statements). References 15 publications (12 reference statements).
“…More specifically, it is sufficient to show the following inequality (14) holds for each $1 \le j \le q$. Then an upper bound of $\nu(G_x, G_y)$ as given in (12) will be smaller than (13), which is the right hand side of (10). For a finite $p$, the right hand side of (14) is given by…”
Section: … (mentioning)
confidence: 99%
“…Although the K-L divergence does not satisfy the symmetry property and is not a metric, it provides useful interpretations for problems related to probability distributions. The usage of the K-L divergence rate as a measure of distance between two Markov chains has been widely accepted [13], [18]. Therefore, although taking d(·, ·) as the K-L divergence in (5) does not provide a metric, some notion of similarity can still be inferred.…”
Section: A. Extension: When $d(\cdot,\cdot)$ Is Not a P-norm (mentioning)
confidence: 99%
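To make the non-symmetry noted in the excerpt concrete, here is a quick numeric check, reusing the hypothetical kl_divergence_rate sketch given after the abstract above:

```python
import numpy as np

# Reuses kl_divergence_rate from the sketch after the abstract; swapping
# the arguments on the same pair of chains gives different rates.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = np.array([[0.5, 0.5], [0.5, 0.5]])
print(kl_divergence_rate(P, Q))  # ~0.310 nats: D-rate(P || Q)
print(kl_divergence_rate(Q, P))  # ~0.367 nats: D-rate(Q || P)
```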
“…For instance, […] where […] corresponds to the statistical mean under the distribution of […]. Considering […] as the initial-state distribution of the process and […] as the transition matrix, we obtain (16), where the quantity in (17) is the Shannon entropy rate of the first-order Markov process [47]. Thus, we can rewrite the log-likelihood ratio test in terms of the Shannon entropy rate of the processes as in (18). In general, for […] and […] sufficiently large, the first term can be dismissed [46].…”
Section: B. Kullback–Leibler Merging Criterion (mentioning)
confidence: 99%
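The Shannon entropy rate invoked at (17) has, for a stationary first-order chain, the standard closed form $H = -\sum_i \pi(i) \sum_j P(i,j) \log P(i,j)$. A minimal sketch under that assumption (the excerpt's own symbols did not survive extraction), reusing stationary_distribution from the sketch after the abstract:

```python
import numpy as np

def entropy_rate(P):
    """Shannon entropy rate -sum_i pi(i) sum_j P[i,j] log P[i,j], in nats
    per symbol, of a stationary first-order Markov chain with transition
    matrix P. Terms with P[i,j] = 0 contribute nothing."""
    pi = stationary_distribution(P)  # left Perron eigenvector, defined earlier
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))

print(entropy_rate(np.array([[0.9, 0.1], [0.2, 0.8]])))  # ~0.38 nats per symbol
```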
“…An appropriate metric to quantify the distance between two distributions x(i) and z(j) is the K-L divergence [23] given…”
Section: Model Reduction for Markov Chains (mentioning)
confidence: 99%
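For reference, the K-L divergence the excerpt begins to state, between two distributions $x$ and $z$, is the standard $D(x \| z) = \sum_i x(i) \log \frac{x(i)}{z(i)}$. A minimal sketch, with the names x and z borrowed from the quote:

```python
import numpy as np

def kl_divergence(x, z):
    """D(x || z) = sum_i x[i] log(x[i] / z[i]), in nats. Assumes z[i] > 0
    wherever x[i] > 0; otherwise the divergence is infinite."""
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    mask = x > 0  # terms with x[i] = 0 contribute nothing
    return float(np.sum(x[mask] * np.log(x[mask] / z[mask])))

print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # ~0.51 nats
```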