Abstract-We introduce the notion of normalized entropic vectors, slightly different from the standard definition in the literature in that we normalize entropy by the logarithm of the alphabet size. We argue that this definition is more natural for determining the capacity region of networks and, in particular, that it smooths out the irregularities of the space of non-normalized entropy vectors and renders the closure of the resulting space convex (and compact). Furthermore, the closure of the space remains convex even under constraints imposed by memoryless channels internal to the network. It therefore follows that, for a large class of acyclic memoryless networks, the capacity region for an arbitrary set of sources and destinations can be found by maximization of a linear function over the convex set of channel-constrained normalized entropic vectors and some linear constraints. While this may not necessarily make the problem simpler, it certainly circumvents the "infinite-letter characterization" issue, as well as the non-convexity of earlier formulations, and exposes the core of the problem. We show that the approach allows one to obtain the classical cutset bounds via a duality argument. Furthermore, the approach readily shows that, for acyclic memoryless wired networks, one need only consider the space of unconstrained normalized entropic vectors, thus separating channel and network coding, a result very recently recognized in the literature.

While, "in principle", it is possible to write down a characterization for the capacity region of most network information theory problems, the difficulty is that this characterization is infinite-letter and non-convex. In other words, evaluating the capacity region requires solving an infinite succession of non-convex optimization problems over certain distributions whose number of variables goes to infinity. This is in stark contrast with point-to-point (single-user) memoryless channels, where the characterization is both single-letter and convex.

[Fig. 1. A point-to-point communication problem: a source S enters the memoryless channel p_{X|S}(x|s), producing the output X.]

To make this more explicit, consider the point-to-point memoryless channel of Fig. 1. The capacity is clearly

C = max_{p_S(·)} I(S; X) = max_{p_S(·)} {H(X) − H(X|S)},   (1)
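As a concrete illustration of evaluating Eq. (1) for a discrete memoryless channel, here is a minimal sketch using the classical Blahut-Arimoto algorithm. The channel (a binary symmetric channel with crossover probability 0.1) and all function and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """W[s, x] = p(x | s). Returns (capacity in bits, optimal input pmf)."""
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)            # start from the uniform input pmf
    for _ in range(iters):
        q = p @ W                             # output pmf q(x) induced by p
        # D( W(.|s) || q ) for each input s, in bits
        d = np.sum(W * np.log2(W / q, where=W > 0, out=np.zeros_like(W)), axis=1)
        p = p * np.exp2(d)                    # multiplicative Blahut-Arimoto update
        p /= p.sum()
    q = p @ W
    d = np.sum(W * np.log2(W / q, where=W > 0, out=np.zeros_like(W)), axis=1)
    return float(p @ d), p                    # sum_s p(s) D(.) = I(S; X)

W = np.array([[0.9, 0.1],
              [0.1, 0.9]])                    # BSC with crossover 0.1
C, p_opt = blahut_arimoto(W)
print(C)                                      # ~0.531 bits, i.e. 1 - H_b(0.1)
```

For a single-user channel this maximization is a convex program, which is precisely the contrast the paragraph above draws with the network setting.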
Abstract-It has recently been shown that there is a connection between Cayley's hyperdeterminant and the principal minors of a symmetric matrix. With an eye towards characterizing the entropy region of jointly Gaussian random variables, we obtain three new results on the relationship between Gaussian random variables and the hyperdeterminant. The first is a new (determinant) formula for the 2 × 2 × 2 hyperdeterminant. The second is a new (transparent) proof of the fact that the principal minors of an n × n symmetric matrix satisfy the 2 × 2 × . . . × 2 (n times) hyperdeterminant relations. The third is a minimal set of 5 equations that 15 real numbers must satisfy to be the principal minors of a 4 × 4 symmetric matrix.
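The n = 3 case of the second result can be checked numerically. Below is a hedged sketch (not from the paper): the indexing convention, where a[i,j,k] is the principal minor of the rows and columns selected by the bits (i,j,k) and the empty minor is a[0,0,0] = 1, is an assumption, and the hyperdeterminant is written out in its standard Cayley form.

```python
import itertools
import numpy as np

def principal_minor_tensor(M):
    """Pack all principal minors of an n x n matrix into a 2x...x2 tensor."""
    n = M.shape[0]
    a = np.empty((2,) * n)
    for bits in itertools.product((0, 1), repeat=n):
        idx = [i for i, b in enumerate(bits) if b]
        a[bits] = np.linalg.det(M[np.ix_(idx, idx)]) if idx else 1.0
    return a

def hyperdet_222(a):
    """Cayley's 2x2x2 hyperdeterminant, expanded in the tensor entries."""
    return (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
            + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2
            - 2 * (a[0,0,0]*a[0,0,1]*a[1,1,0]*a[1,1,1]
                   + a[0,0,0]*a[0,1,0]*a[1,0,1]*a[1,1,1]
                   + a[0,0,0]*a[0,1,1]*a[1,0,0]*a[1,1,1]
                   + a[0,0,1]*a[0,1,0]*a[1,0,1]*a[1,1,0]
                   + a[0,0,1]*a[0,1,1]*a[1,0,0]*a[1,1,0]
                   + a[0,1,0]*a[0,1,1]*a[1,0,0]*a[1,0,1])
            + 4 * (a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0]
                   + a[0,0,1]*a[0,1,0]*a[1,0,0]*a[1,1,1]))

B = np.random.randn(3, 3)
M = (B + B.T) / 2                     # a random symmetric matrix
print(hyperdet_222(principal_minor_tensor(M)))   # ~0, up to rounding error
```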
Given n (discrete or continuous) random variables Xi, the (2^n − 1)-dimensional vector obtained by evaluating the joint entropy of all non-empty subsets of {X1, . . ., Xn} is called an entropic vector. Determining the region of entropic vectors is an important open problem with many applications in information theory. Recently, it has been shown that the entropy regions for discrete and continuous random variables, though different, can be determined from one another. An important class of continuous random variables are those that are vector-valued and jointly Gaussian. It is known that Gaussian random variables violate the Ingleton bound, which many random variables such as those obtained from linear codes over finite fields do satisfy, and they also achieve certain non-Shannon type inequalities. In this paper we give a full characterization of the convex cone of the entropy region of three jointly Gaussian vector-valued random variables and prove that it is the same as the convex cone of three scalar-valued Gaussian random variables and further that it yields the entire entropy region of 3 arbitrary random variables. We further determine the actual entropy region of 3 vector-valued jointly Gaussian random variables through a conjecture. For n ≥ 4 random variables, we point out a set of 2^n − 1 − n(n+1)/2 minimal necessary and sufficient conditions that 2^n − 1 numbers must satisfy in order to correspond to the entropy vector of n scalar jointly Gaussian random variables. This improves on a result of Holtz and Sturmfels which gave a nonminimal set of conditions. These constraints are related to Cayley's hyperdeterminant and hence, with an eye towards characterizing the entropy region of jointly Gaussian random variables, we also present some new results in this area. We obtain a new (determinant) formula for the 2 × 2 × 2 hyperdeterminant and we also give a new (transparent) proof of the fact that the principal minors of an n × n symmetric matrix satisfy the 2 × 2 × . . . × 2 (up to n times) hyperdeterminant relations.
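For concreteness, the Gaussian entropy vector referred to above is computed from the covariance matrix via the standard formula h(X_S) = (1/2) log((2πe)^|S| det K_S), in nats. The sketch below (an assumed helper, not from the paper) evaluates all 2^n − 1 subset entropies for scalar jointly Gaussian variables with covariance K.

```python
import itertools
import numpy as np

def gaussian_entropy_vector(K):
    """Differential entropies (in nats) of all non-empty subsets of the
    jointly Gaussian variables with covariance matrix K."""
    n = K.shape[0]
    h = {}
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            KS = K[np.ix_(S, S)]                       # principal submatrix
            h[S] = 0.5 * np.log((2 * np.pi * np.e) ** len(S)
                                * np.linalg.det(KS))
    return h

B = np.random.randn(3, 3)
K = B @ B.T + 3 * np.eye(3)        # a random positive-definite covariance
for S, val in gaussian_entropy_vector(K).items():
    print(S, round(val, 4))
```

Since each subset entropy depends on K only through a principal minor det K_S, characterizing Gaussian entropy vectors reduces to characterizing principal minors of symmetric matrices, which is where the hyperdeterminant relations enter.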
Abstract-The problem of determining the region of entropic vectors is a central one in information theory. Recently, there has been a great deal of interest in the development of non-Shannon information inequalities, which provide outer bounds to the aforementioned region; however, there has been less recent work on developing inner bounds. This paper develops an inner bound that applies to any number of random variables and which is tight for 2 and 3 random variables (the only cases where the entropy region is known). The construction is based on probability distributions generated by a lattice. The region is shown to be a polytope generated by a set of linear inequalities. Study of the region for 4 or more random variables is currently under investigation.
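The abstract does not spell out the lattice construction, so as a hedged illustration here is the generic primitive any such inner bound rests on: evaluating the (2^n − 1)-dimensional entropic vector of a given joint pmf, whose coordinates can then be tested against the linear inequalities that cut out the polytope. All names here are assumptions for illustration.

```python
import itertools
import numpy as np

def entropic_vector(p):
    """p: n-dimensional array of joint probabilities. Entropies in bits."""
    n = p.ndim
    h = {}
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            others = tuple(i for i in range(n) if i not in S)
            m = p.sum(axis=others) if others else p    # marginal pmf of X_S
            m = m[m > 0]                               # drop zero-probability cells
            h[S] = float(-(m * np.log2(m)).sum())
    return h

# Example: two maximally correlated bits, so H(X1) = H(X2) = H(X1, X2) = 1.
p = np.array([[0.5, 0.0],
              [0.0, 0.5]])
print(entropic_vector(p))    # {(0,): 1.0, (1,): 1.0, (0, 1): 1.0}
```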
Abstract-Although determining the space of entropic vectors for n random variables, denoted by Γ*_n, is crucial for solving a large class of network information theory problems, there has been scant progress in explicitly characterizing Γ*_n for n ≥ 4. In this paper, we present a certain characterization of quasi-uniform distributions that allows one to numerically stake out the entropic region via a random walk to any desired accuracy. When coupled with Markov chain Monte Carlo (MCMC) methods, one may "bias" the random walk so as to maximize certain functions of the entropy vector. As an example, we look at maximizing the violation of the Ingleton inequality for four random variables and report a violation well in excess of what has been previously available in the literature. Inspired by the MCMC method, we also propose a framework for designing optimal nonlinear network codes by performing a random walk over certain truth tables. We show that the method can be decentralized and demonstrate its efficacy by applying it to the Vamos network and a certain storage problem from [1].
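A hedged sketch of the objective such a biased random walk might maximize: the Ingleton gap I(X1;X2) − I(X1;X2|X3) − I(X1;X2|X4) − I(X3;X4), rewritten below in joint entropies; a positive value means the Ingleton inequality is violated. The helper, the index convention (0..3 for X1..X4), and the example pmf are assumptions, not the paper's construction.

```python
import numpy as np

def entropy(p, S):
    """Joint entropy in bits of the variables indexed by S under joint pmf p."""
    others = tuple(i for i in range(p.ndim) if i not in S)
    m = p.sum(axis=others) if others else p
    m = m[m > 0]
    return float(-(m * np.log2(m)).sum())

def ingleton_gap(p):
    """LHS minus RHS of Ingleton, expanded into joint entropies:
    H1 + H2 + H34 + H123 + H124 - H12 - H13 - H14 - H23 - H24."""
    H = lambda *S: entropy(p, S)
    return (H(0) + H(1) + H(2, 3) + H(0, 1, 2) + H(0, 1, 3)
            - H(0, 1) - H(0, 2) - H(0, 3) - H(1, 2) - H(1, 3))

# Four i.i.d. uniform bits: independent variables meet Ingleton with equality.
p = np.full((2, 2, 2, 2), 1.0 / 16)
print(ingleton_gap(p))    # 0.0; a violating distribution would give > 0
```

An MCMC search would propose local perturbations of the distribution (or, for quasi-uniform distributions, of the underlying support) and accept them with a bias toward larger values of this gap.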