We propose an approach to lossy source coding that draws on ideas from Gibbs sampling, simulated annealing, and Markov chain Monte Carlo (MCMC). The idea is to sample a reconstruction sequence from a Boltzmann distribution associated with an energy function that incorporates the distortion between the source and the reconstruction, the compressibility of the reconstruction, and the point sought on the rate-distortion curve. To sample from this distribution, we use a 'heat bath algorithm': starting from an initial candidate reconstruction (say, the original source sequence), at every iteration an index i is chosen and the i-th sequence component is replaced by a draw from the conditional probability distribution of that component given all the others. At the end of this process, the encoder conveys the reconstruction to the decoder using universal lossless compression. The complexity of each iteration is independent of the sequence length and depends only linearly on a certain context parameter (which grows sub-logarithmically with the sequence length). We show that the proposed algorithms achieve optimum rate-distortion performance, in the limit of a large number of iterations and large sequence length, when applied to any stationary ergodic source. Initial experimental results are promising. Employing our lossy compressors on noisy data, with an appropriately chosen distortion measure and distortion level, followed by a simple de-randomization operation, yields a family of denoisers that compares favorably (both theoretically and in practice) with other MCMC-based schemes and with the Discrete Universal Denoiser (DUDE).
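Below is a minimal sketch of one heat-bath iteration of the kind described above, not the authors' implementation: the energy combines a crude compressibility proxy (a k-th order conditional empirical entropy of the reconstruction) with a slope-weighted Hamming distortion, and one component is resampled from the resulting Boltzmann conditional. The function names, the choice of Hamming distortion, and the entropy proxy are assumptions made for illustration.

```python
import math
import random
from collections import Counter

def energy(source, recon, slope, k=2):
    """Assumed energy: compressibility proxy + slope * distortion.
    Compressibility is approximated by the k-th order conditional
    empirical entropy of the reconstruction; distortion is Hamming."""
    n = len(source)
    distortion = sum(s != r for s, r in zip(source, recon)) / n
    ctx_counts, joint_counts = Counter(), Counter()
    for i in range(k, n):
        ctx = tuple(recon[i - k:i])
        ctx_counts[ctx] += 1
        joint_counts[ctx + (recon[i],)] += 1
    cond_entropy = 0.0
    for key, c in joint_counts.items():
        p_joint = c / (n - k)
        p_cond = c / ctx_counts[key[:-1]]
        cond_entropy -= p_joint * math.log2(p_cond)
    return cond_entropy + slope * distortion

def gibbs_step(source, recon, alphabet, slope, temperature):
    """One heat-bath iteration: pick an index and resample that component
    from the Boltzmann conditional given all the other components."""
    i = random.randrange(len(recon))
    weights = []
    for a in alphabet:
        candidate = recon[:i] + [a] + recon[i + 1:]
        weights.append(math.exp(-energy(source, candidate, slope) / temperature))
    recon[i] = random.choices(alphabet, weights=weights)[0]
    return recon
```

For clarity this sketch recomputes the full energy for every candidate symbol, which costs O(n) per iteration; the per-iteration complexity claimed in the abstract relies on updating the energy incrementally, since changing one component only affects the counts of the contexts that overlap it.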
One major open problem in network coding is to characterize the capacity region of a general multi-source multi-demand network. There are existing computational tools for bounding the capacity of general networks, but their computational complexity grows very quickly with the size of the network. This motivates us to propose a new hierarchical approach that finds upper- and lower-bounding networks of smaller size for a given network. This approach sequentially replaces components of the network with simpler structures, i.e., ones with fewer links or nodes, so that the resulting network is more amenable to computational analysis and its capacity provides an upper or lower bound on the capacity of the original network. The accuracy of the resulting bounds can itself be bounded as a function of the link capacities. Surprisingly, we are able to simplify some families of network structures without any loss in accuracy.
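As a toy illustration of the kind of component replacement described above (not the operations from the paper itself), two elementary simplifications that yield bounding networks are deleting a link, which can only shrink the capacity region and therefore gives a lower-bounding network, and relaxing a link to unlimited capacity, which can only enlarge the region and gives an upper-bounding network. The dictionary-of-capacities representation and helper names below are assumptions made for this sketch.

```python
import math

def lower_bounding_network(capacities, link):
    """Delete `link`: any code for the reduced network also runs on the
    original network, so the reduced capacity is a lower bound."""
    reduced = dict(capacities)
    reduced.pop(link, None)
    return reduced

def upper_bounding_network(capacities, link):
    """Relax `link` to unlimited capacity: any code for the original
    network still runs here, so the relaxed capacity is an upper bound."""
    relaxed = dict(capacities)
    relaxed[link] = math.inf
    return relaxed

# Example: a small directed network, links keyed by (tail, head), capacities in bits per use.
caps = {("s", "a"): 2.0, ("s", "b"): 1.0, ("a", "t"): 1.5, ("b", "t"): 1.0}
lower_net = lower_bounding_network(caps, ("s", "b"))
upper_net = upper_bounding_network(caps, ("s", "b"))
```

In the spirit of the abstract, how far such simplified networks can deviate from the original is controlled by the capacity of the link being altered (here 1.0 bit per use).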
In this paper, we study the effect of a single link on the capacity of a network of error-free bit pipes. More precisely, we study the change in network capacity that results when we remove a single link of capacity δ. In a recent result, we proved that if all the sources are directly available to a single supersource node, then removing a link of capacity δ cannot change the capacity region of the network by more than δ in each dimension. In this paper, we extend this result to the case of multi-source, multi-sink networks for some special network topologies.
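One way to write the earlier supersource result formally, with notation assumed for this note rather than taken from the paper:

```latex
% \mathcal{C}(\mathcal{N}) denotes the capacity region of network \mathcal{N},
% and \mathcal{N}\setminus e the network with link e, of capacity \delta, removed.
% Removing the link can only shrink the region, and by at most \delta per dimension:
\[
  \mathcal{C}(\mathcal{N}\setminus e) \subseteq \mathcal{C}(\mathcal{N}),
  \qquad
  (R_1,\ldots,R_m) \in \mathcal{C}(\mathcal{N})
  \;\Longrightarrow\;
  \big((R_1-\delta)^{+},\ldots,(R_m-\delta)^{+}\big) \in \mathcal{C}(\mathcal{N}\setminus e),
\]
% where (x)^{+} = \max(x, 0).
```

The paper's contribution is to establish versions of the second containment beyond the supersource setting, for certain multi-source, multi-sink topologies.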
The main promise of compressed sensing is accurate recovery of high-dimensional structured signals from an underdetermined set of randomized linear projections. Several types of structure, such as sparsity and low-rankness, have already been explored in the literature. For each type of structure, a recovery algorithm has been developed based on the properties of signals with that structure. Such algorithms recover any signal complying with the assumed model from a sufficient number of its linear projections. However, in many practical situations the underlying structure of the signal is not known, or is only partially known. Moreover, it is desirable to have recovery algorithms that can be applied to signals with different types of structure. In this paper, the problem of developing universal algorithms for compressed sensing of stochastic processes is studied. First, Rényi's notion of information dimension (ID) is generalized to analog stationary processes. This provides a measure of complexity for such processes and is connected to the number of measurements required for their accurate recovery. Then a minimum entropy pursuit (MEP) optimization approach is proposed, and it is proven to reliably recover any stationary process satisfying certain mixing constraints from a sufficient number of randomized linear measurements, without any prior information about the distribution of the process. It is proved that a Lagrangian-type approximation of the MEP optimization problem, referred to as the Lagrangian-MEP problem, is identical to an implementable heuristic algorithm proposed by Baron et al. It is shown that, for the right choice of parameters, the Lagrangian-MEP algorithm not only has the same asymptotic performance as MEP optimization but is also robust to measurement noise. For memoryless sources with a discrete-continuous mixture distribution, the fundamental limit on the minimum number of measurements required by a non-universal compressed sensing decoder was characterized by Wu et al. For such sources, it is proved that there is no loss in universal coding: both MEP and Lagrangian-MEP asymptotically achieve the optimal performance.
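The two optimizations can be sketched schematically as follows; the quantization operator, entropy functional, and normalizations shown here are assumptions for illustration and differ in detail from the paper's definitions.

```latex
% [u]_b denotes a b-bit quantized version of the candidate sequence u,
% \hat H_k(\cdot) a k-th order conditional empirical entropy, A the measurement
% matrix, and y the vector of linear measurements of the source x.
\[
  \text{MEP:}\qquad
  \hat{x} \in \arg\min_{u}\ \hat H_k\!\big([u]_b\big)
  \quad\text{subject to}\quad A u = y,
\]
\[
  \text{Lagrangian-MEP:}\qquad
  \hat{x} \in \arg\min_{u}\ \hat H_k\!\big([u]_b\big) + \lambda\,\lVert A u - y\rVert_2^{2}.
\]
```

The Lagrangian form replaces the hard measurement constraint with a penalty, which is what makes it tolerant of measurement noise for suitable choices of the multiplier.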