2016
DOI: 10.1017/etds.2015.90
Asymptotic entropy of transformed random walks

Abstract: We consider general transformations of random walks on groups determined by Markov stopping times and prove that the asymptotic entropy (respectively, rate of escape) of the transformed random walks is equal to the asymptotic entropy (respectively, rate of escape) of the original random walk multiplied by the expectation of the corresponding stopping time. This is an analogue of the well-known Abramov formula from ergodic theory; its particular cases were established earlier by Kaimanovich [Differential entrop…
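The Abramov-type identity stated in the abstract can be checked numerically for the rate of escape. Below is a minimal Monte-Carlo sketch, not taken from the paper: the walk on Z, the step law, and the deterministic stopping time τ ≡ 3 are illustrative choices. The transformed walk, whose single increment has law µ^{*3}, should escape at E[τ] = 3 times the original rate.

```python
import random

def simulate_rate(step, n_steps, trials, seed=0):
    """Monte-Carlo estimate of the rate of escape lim |Z_n|/n for a walk on Z."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        z = 0
        for _ in range(n_steps):
            z += step(rng)
        total += abs(z) / n_steps
    return total / trials

# Original walk on Z: +1 with prob 0.7, -1 with prob 0.3,
# so the rate of escape is |E[X_1]| = 0.4.
step = lambda rng: 1 if rng.random() < 0.7 else -1

# Transformed walk: deterministic stopping time tau = 3, i.e. one transformed
# increment is the sum of 3 original increments (step law mu^{*3}).
K = 3
step_tau = lambda rng: sum(step(rng) for _ in range(K))

l_orig = simulate_rate(step, 2000, 500)
l_trans = simulate_rate(step_tau, 2000, 500)
print(l_orig, l_trans)  # l_trans should be close to E[tau] * l_orig
```

Deterministic times are the simplest Markov stopping times; the paper's result covers general ones, but already here the multiplicative formula ℓ(µ_τ) = E[τ]·ℓ(µ) is visible in the simulation.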

Cited by 10 publications (11 citation statements)
References 14 publications
“…Remark 3.17. The results of Forghani [10] are somehow in the same spirit as Theorem 3.16. It is shown there that the asymptotic entropy of transformed random walks (by stopping times) equals the product of the original entropy and the expectation of the corresponding stopping times.…”
Section: The Entropy of the Retracted Walk
confidence: 60%
“…−(1/n) E[log π_n(X_n)] if the limit exists, where π_n is the distribution of X_n. Intuitively, the entropy can be seen as the "asymptotic uncertainty" of the random walk and it plays an important role in the description of the asymptotic behavior of the random walk; see Derriennic [9,10], Guivarc'h [15], Kaimanovich [17], Kaimanovich and Vershik [18] and Vershik [24], amongst others. It is well-known that the limit defining h necessarily exists for random walks on groups whenever E[−log π_1(X_1)] < ∞ (this is an application of Kingman's subadditive ergodic theorem [17]).…”
Section: Introduction
confidence: 99%
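The quoted definition can be made concrete on the simplest group: for the symmetric ±1 walk on Z, π_n is a shifted binomial law, H(π_n) grows only like (1/2) log n, and hence the asymptotic entropy h is zero. A small illustrative sketch, not drawn from the cited papers:

```python
import math

def entropy_of_walk_on_Z(n, p=0.5):
    """Exact Shannon entropy H(pi_n) of the time-n position of the (+1/-1)
    walk on Z; pi_n is a binomial(n, p) distribution shifted to 2k - n."""
    h = 0.0
    for k in range(n + 1):
        prob = math.comb(n, k) * p**k * (1 - p)**(n - k)
        if prob > 0:
            h -= prob * math.log(prob)
    return h

for n in (10, 100, 1000):
    print(n, entropy_of_walk_on_Z(n) / n)
# H(pi_n)/n shrinks roughly like (log n)/(2n): the asymptotic entropy on Z is 0.
```

Zero entropy here reflects the general fact that random walks on abelian groups have trivial Poisson boundary; positive entropy requires exponential growth of the support of π_n, as for free groups.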
“…Remark 6.6. Note that when G is a discrete group and µ has finite entropy, then H(µ_τ) ≤ E(τ)H_1; see [For17]. However, in the case of locally compact groups, the same statement does not hold for differential entropies.…”
Section: Stopping Time Trick
confidence: 99%
“…L, and µ_τ = θ = ∑_{n=0}^∞ β^{*n} ∗ α. The probability measure θ was introduced by Willis [Wil90]; transformations of random walks via stopping times are due to the first author and Kaimanovich, who show these transformations preserve the Poisson boundaries for random walks on discrete groups (for more details see [For15] and [For17]).…”
Section: Stopping Time Trick
confidence: 99%
“…We also need the following Abramov-type formula, which generalizes Lemma 2.5 of [For17]. Proposition 6.4.…”
Section: Stopping Times and Induced Random Walks
confidence: 99%