Abstract—We characterize the best achievable performance of lossy compression algorithms operating on arbitrary random sources and with respect to general distortion measures. Direct and converse coding theorems are given for variable-rate codes operating at a fixed distortion level, emphasizing: a) nonasymptotic results, b) optimal or near-optimal redundancy bounds, and c) results with probability one. This development is based in part on the observation that there is a precise correspondence between compression algorithms and probability measures on the reproduction alphabet, analogous to the Kraft inequality in lossless data compression. In the case of stationary ergodic sources, our results reduce to the classical coding theorems. As an application of these general results, we examine the performance of codes based on mixture codebooks for discrete memoryless sources. A mixture codebook (or Bayesian codebook) is a random codebook generated from a mixture over some class of reproduction distributions. We demonstrate the existence of universal mixture codebooks, and show that it is possible to universally encode memoryless sources with redundancy of approximately (d/2) log n bits, where d is the dimension of the simplex of probability distributions on the reproduction alphabet.
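The closing redundancy estimate is simple arithmetic in the size of the reproduction alphabet. A minimal sketch of that calculation (the function name is illustrative, and the use of base-2 logarithms to express the result in bits is my assumption):

```python
import math

def mixture_code_redundancy_bits(reproduction_alphabet_size, block_length):
    # Dimension d of the simplex of probability distributions on the
    # reproduction alphabet: one fewer than the alphabet size.
    d = reproduction_alphabet_size - 1
    # Approximate universal-coding redundancy, (d/2) * log(n), in bits.
    return (d / 2) * math.log2(block_length)

# Binary reproduction alphabet (d = 1), blocks of length n = 1024:
# roughly 5 extra bits over the optimal description length.
print(mixture_code_redundancy_bits(2, 1024))  # → 5.0
```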
We present a large-system performance analysis of blind and group-blind multiuser detection methods, in which the receivers are estimated from the received signal samples. In particular, we assume binary random spreading, and let the spreading gain N, the number of users K, and the number of received signal samples M all go to infinity, while keeping the ratios K/N and M/N fixed. We characterize the asymptotic performance of the direct-matrix-inversion (DMI) blind linear minimum mean-square error (MMSE) receiver, the subspace blind linear MMSE receiver, and the group-blind linear hybrid receiver. We first derive the asymptotic average output signal-to-interference-plus-noise ratio (SINR) for each of these receivers. Our results reveal an interesting "saturation" phenomenon: the output SINR of each of these receivers converges to a finite limit as the signal-to-noise ratio (SNR) of the desired user increases, in stark contrast to the output SINR achieved by the exact linear MMSE receiver, which can be made arbitrarily large. This indicates that the capacity of a wireless system with blind or group-blind multiuser receivers is not only interference-limited, but also estimation-error-limited. We then show that for both the blind and group-blind receivers, the output residual interference has an asymptotic Gaussian distribution, independent of the realizations of the spreading sequences. This Gaussianity implies that in a large system the bit-error rate (BER) is related to the SINR simply through the Gaussian Q function.
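The asymptotic Gaussianity of the residual interference means the large-system BER can be read directly off the output SINR. A minimal sketch of that mapping, assuming antipodal (BPSK-style) signaling so that BER = Q(sqrt(SINR)); the helper names are illustrative:

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) = P(Z > x) for Z ~ N(0, 1),
    computed via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_from_sinr(sinr):
    """Large-system BER when the residual interference is Gaussian
    and signaling is antipodal: BER = Q(sqrt(SINR))."""
    return q_function(math.sqrt(sinr))

# At an output SINR of 9 (about 9.5 dB), sqrt(SINR) = 3,
# so the BER is Q(3), on the order of 1e-3.
print(ber_from_sinr(9.0))
```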