2022
DOI: 10.48550/arxiv.2203.14743
Preprint

Neural Estimation and Optimization of Directed Information over Continuous Spaces

Abstract: This work develops a new method for estimating and optimizing the directed information rate between two jointly stationary and ergodic stochastic processes. Building upon recent advances in machine learning, we propose a recurrent neural network (RNN)-based estimator which is optimized via gradient ascent over the RNN parameters. The estimator does not require prior knowledge of the underlying joint and marginal distributions. The estimator is also readily optimized over continuous input processes realized by …
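The abstract describes an estimator that is trained by gradient ascent over network parameters without knowledge of the underlying distributions. A minimal sketch of the variational principle this family of neural estimators typically builds on is the Donsker-Varadhan lower bound on a KL divergence, D_KL(P‖Q) ≥ E_P[g(X)] − log E_Q[e^{g(X)}], which holds for any critic function g and is tight at the log-likelihood ratio. Everything below (the Gaussian toy distributions, the `dv_bound` helper, the specific critics) is an illustrative assumption, not the paper's estimator or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def dv_bound(g, xp, xq):
    """Sample-based Donsker-Varadhan lower bound on D_KL(P||Q),
    from samples xp ~ P and xq ~ Q and a critic function g."""
    return g(xp).mean() - np.log(np.exp(g(xq)).mean())

# Toy setting: P = N(1,1), Q = N(0,1), so the true D_KL(P||Q) = 0.5.
# The optimal critic is the log-likelihood ratio g*(x) = x - 0.5.
xp = rng.normal(1.0, 1.0, 100_000)
xq = rng.normal(0.0, 1.0, 100_000)

tight = dv_bound(lambda x: x - 0.5, xp, xq)  # near the true KL of 0.5
loose = dv_bound(lambda x: 0.3 * x, xp, xq)  # a worse critic gives a looser bound
```

Because the bound holds for every critic and is tight at the optimum, maximizing it over a parametrized critic class (here a network) simultaneously yields both an estimate of the divergence and the maximizing function.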

Cited by 1 publication (9 citation statements)
References 61 publications (104 reference statements)
“…The DINE [22] is an RNN-based estimator of I(X → Y) from a sample D_n := (X^n, Y^n) ∼ P_{X^n Y^n}. Its derivation begins with a representation of the DI rate as the asymptotic difference of the following KL divergence terms:…”
Section: Directed Information Neural Estimation
confidence: 99%
“…, or in their parametrized form g_θ, where θ ∈ Θ. With this notation, the DINE objective is given by [22]…”
Section: Directed Information Neural Estimation
confidence: 99%
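The snippet above refers to optimizing the DINE objective over a parametrized critic family g_θ, θ ∈ Θ. As a hedged sketch of that idea, not the paper's RNN-based method: assuming a Donsker-Varadhan-type objective and a deliberately simple one-parameter linear critic g_θ(x) = θx on a Gaussian toy problem, gradient ascent over θ recovers the optimal critic and the true KL value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: P = N(1,1), Q = N(0,1); true D_KL(P||Q) = 0.5.
# For the linear critic g_theta(x) = theta * x, the DV objective
#   L(theta) = theta * E_P[X] - log E_Q[exp(theta * X)]
# is maximized at theta = 1, where it attains the true KL.
xp = rng.normal(1.0, 1.0, 50_000)  # samples from P
xq = rng.normal(0.0, 1.0, 50_000)  # samples from Q

def dv(theta):
    """Sample estimate of the DV objective for the linear critic."""
    return theta * xp.mean() - np.log(np.mean(np.exp(theta * xq)))

theta, lr = 0.0, 0.5
for _ in range(200):
    # Gradient of the objective: E_P[X] minus the mean of X under
    # the exponentially tilted empirical Q-distribution.
    w = np.exp(theta * xq)
    grad = xp.mean() - np.sum(w * xq) / np.sum(w)
    theta += lr * grad

# theta converges near 1.0 and dv(theta) near the true KL of 0.5.
```

In the paper's setting the critic is an RNN and the objective involves two such KL-type terms whose difference recovers the DI rate, but the optimization loop has the same structure: ascend the variational objective in the critic parameters.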