2007 IEEE Information Theory Workshop on Information Theory for Wireless Networks
DOI: 10.1109/itwitwn.2007.4318051

Normalized Entropy Vectors, Network Information Theory and Convex Optimization

Abstract: We introduce the notion of normalized entropic vectors, slightly different from the standard definition in the literature in that we normalize entropy by the logarithm of the alphabet size. We argue that this definition is more natural for determining the capacity region of networks and, in particular, … While, "in principle", it is possible to write down a characterization for the capacity region of most network information theory problems, the difficulty is that this characterization is … infinite-letter …
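
To make the normalization concrete, here is a minimal Python sketch (hypothetical helper names, assuming all n variables share a common alphabet size N; not code from the paper) that computes the normalized entropy vector of a joint distribution, i.e. the entropy of every nonempty subset of the variables divided by log N:

    import itertools
    import numpy as np

    def entropy(p):
        # Shannon entropy (in nats) of a probability vector; zeros are skipped.
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    def normalized_entropy_vector(joint, N):
        # joint: ndarray of shape (N,)*n holding the joint pmf of n variables,
        # each over an alphabet of size N. Returns {subset S: H(X_S) / log N}.
        n = joint.ndim
        h = {}
        for k in range(1, n + 1):
            for S in itertools.combinations(range(n), k):
                # Marginalize out the variables outside S, then take entropy.
                axes = tuple(i for i in range(n) if i not in S)
                marginal = joint.sum(axis=axes)
                h[S] = entropy(marginal.ravel()) / np.log(N)
        return h

    # Example: two independent uniform bits (N = 2).
    joint = np.full((2, 2), 0.25)
    print(normalized_entropy_vector(joint, 2))
    # -> {(0,): 1.0, (1,): 1.0, (0, 1): 2.0} (up to floating point)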

Cited by 32 publications (37 citation statements), published between 2008 and 2022. References 9 publications.

“…While both [8] and [9] rely on the max-flow min-cut theorem [10] to show the converse (cf. upper bounding model), similar results on channel-network separation have been established in [11] by using normalized entropy vectors. For point-to-point channels, the same upper bounding models established in [1] have also been developed in [12] for DMCs with finite-alphabet, and in [13] under the notion of strong coordination, where total variation (i.e., an additive gap) is used to measure the difference between the desired joint distribution and the empirical joint distribution of a pair of sequences (or a pair of symbols as in empirical coordination).…”
Section: Introduction
confidence: 52%
“…Γ*_n denotes the space of non-normalized entropies. Theorem 1 (Convexity of Ω̄*_n): the closure of the set of entropic vectors, Ω̄*_n, is convex [6]. … and N_y respectively, and let h_x, h_y ∈ Ω*_n be the corresponding normalized entropy vectors.…”
Section: Network Problem and Entropy Vectors
confidence: 99%
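
For readability, the notation in the excerpt above can be restated in LaTeX (a sketch of the definitions described in the abstract, with N the alphabet size; the formal statements are in [6]):

    % Normalized entropy vector: one coordinate per nonempty subset S,
    % with each subset entropy divided by the log of the alphabet size.
    h_S \;=\; \frac{H(X_S)}{\log N}, \qquad \emptyset \neq S \subseteq \{1,\dots,n\}.

    % Theorem 1, in this notation: the closure \bar{\Omega}^*_n of the set
    % \Omega^*_n of normalized entropic vectors is convex.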
“…Note that the validity of the proofs of Theorem 1 will not be affected when the underlying distributions satisfy some linear channel constraints. Therefore, if we denote the space of entropic vectors that are constrained by the discrete memoryless channels in the network by Ω*_{n,c}, we have [6] Theorem 2 (Channel-Constrained Entropic Vectors): the closure of the channel-constrained entropic vectors, Ω̄*_{n,c}, is convex.…”
Section: Network Problem and Entropy Vectors
confidence: 99%
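
In the same notation, a sketch of the channel-constrained region the excerpt refers to (the precise construction is in [6]; h(p) below denotes the normalized entropy vector of a joint distribution p):

    % \Omega^*_{n,c}: normalized entropic vectors whose underlying joint
    % distribution is consistent with the discrete memoryless channels
    % p(y|x) present in the network.
    \Omega^*_{n,c} \;=\; \bigl\{\, h(p) \;:\; p \ \text{respects the network's channel constraints} \,\bigr\}

    % Theorem 2, in this notation: the closure \bar{\Omega}^*_{n,c} is convex.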
“…The set of all entropy vectors derived from n random variables is denoted by Γ*_n, and its closure Γ̄*_n is well known to be a convex cone. The entropy region is of great importance since maximizing the (weighted) throughput for a large class of wired acyclic networks can be reduced to convex optimization over Γ̄*_n [2], [3]. Despite the importance attached to the entropy region, there exists very little in the way of explicitly characterizing Γ̄*_n for n ≥ 4 random variables.…”
Section: Introduction
confidence: 99%
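
To illustrate the reduction to convex optimization mentioned above: Γ̄*_n has no explicit characterization for n ≥ 4, but replacing it with the polyhedral Shannon (polymatroid) outer bound turns weighted-throughput maximization into a linear program. A minimal Python sketch for n = 2, with an illustrative objective and normalization caps (hypothetical numbers; this relaxation is not the exact construction of [2], [3]):

    import numpy as np
    from scipy.optimize import linprog

    # Coordinates of h: (h1, h2, h12), entropies normalized by log N.
    # Shannon (polymatroid) inequalities for n = 2, written as A @ h <= b.
    A = np.array([
        [ 1.0,  0.0, -1.0],   # monotonicity:  h1 <= h12
        [ 0.0,  1.0, -1.0],   # monotonicity:  h2 <= h12
        [-1.0, -1.0,  1.0],   # submodularity: h12 <= h1 + h2
        [ 1.0,  0.0,  0.0],   # normalization: h1 <= 1
        [ 0.0,  1.0,  0.0],   # normalization: h2 <= 1
    ])
    b = np.array([0.0, 0.0, 0.0, 1.0, 1.0])

    # Maximize a weighted throughput w @ h; linprog minimizes, so negate w.
    w = np.array([1.0, 1.0, 0.0])
    res = linprog(-w, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
    print(res.x)  # an optimizer with h1 = h2 = 1 (h12 anywhere in [1, 2])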