Lattice Coding for Signals and Networks: A Structured Coding Approach to Quantization, Modulation, and Multiuser Information Theory / Ram Zamir, Tel Aviv University.
Includes bibliographical references and index.
ISBN 978-0-521-76698-2 (hardback)
1. Coding theory. 2. Signal processing - Mathematics. 3. Lattice theory. I. Title.
TK5102.92.Z357 2014 003'.54-dc23 2014006008
Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

To my parents Eti and Sasson Zamir

Contents

Preface
Acknowledgements
List of notation

4 Dithering and estimation
4.1 Crypto lemma
4.2 Generalized dither
4.3 White dither spectrum
4.4 Wiener estimation
4.5 Filtered dithered quantization
Summary
Problems
Historical notes

5 Entropy-coded quantization
5.1 The Shannon entropy
5.2 Quantizer entropy
5.3 Joint and sequential entropy coding*
5.4 Entropy-distortion trade-off
5.5 Redundancy over Shannon
5.6 Optimum test-channel simulation
5.7 Comparison with Lloyd's conditions
5.8 Is random dither really necessary?
5.9 Universal quantization*
Summary
Problems
Historical notes

6 Infinite constellation for modulation
6.1 Rate per unit volume
6.2 ML decoding and error probability
6.3 Gap to capacity
6.4 Non-AWGN and mismatch
6.5 Non-equiprobable signaling
6.6 Maximum a posteriori decoding*
Summary
Problems
Historical notes

7 Asymptotic goodness
7.1 Sphere bounds
7.2 Sphere-Gaussian equivalence
7.3 Good covering and quantization
7.4 Does packing imply modulation?
7.5 The Minkowski-Hlawka theorem
7.6 Good packing
7.7 Good modulation
7.8 Non-AWGN
7.9 Simultaneous goodness
Summary
Problems
Historical notes

8 Nested lattices
8.1 Definition and properties
8.2 Cosets and Voronoi codebooks
8.3 Nested linear, lattice and trellis codes
8.4 Dithered codebook
8.5 Good nested lattices
Summary
Problems
Historical notes

9 Lattice shaping
9.1 Voronoi modulation
9.2 Syndrome dilution scheme
9.3 The high SNR case
9.4 Shannon meets Wiener (at medium SNR)
9.5 The mod channel
9.6 Achieving C_AWGN for all SNR
9.7 Geometric interpretation
9.8 Noise-matched decoding
9.9 Is the dither really necessary?
9.10 Voronoi quantization
Summary
Problems
Historical notes

10 Side-information problems
10.1 Syndrome coding
10.2 Gaussian multi-terminal problems
10.3 Rate distortion with side information
10.4 Lattice Wyner-Ziv coding
10.5 Channels with side information
10.6 Lattice dirty-paper coding
Summary
Problems
Historical notes

11 Modulo-lattice modulation
11.1 Separation versus JSCC
11.2 Figures of merit for JSCC
11.3 Joint Wyner-Ziv/dirty-paper coding
11.4 Bandwidth conversion
Summary
Problems
Historical notes

12 Gaussian networks
12.1 The two-help-one problem
12.2 Dirty multiple-access channel
12.3 Lattice network coding
12.4 Interference alignment
12.5 Summary and outlook
Summar...
In this work we investigate the behavior of the minimal rate needed to guarantee a given probability that the distortion exceeds a prescribed threshold, at a fixed finite quantization block length. We show that the excess coding rate above the rate-distortion function is, to first order, inversely proportional to the square root of the block length. We give an explicit expression for the proportionality constant: the inverse Q-function of the allowed excess-distortion probability, times the square root of a source-dependent constant that we term the excess-distortion dispersion. This result is the dual of a corresponding channel coding result, in which the channel dispersion plays the role of the excess-distortion dispersion. The work treats discrete memoryless sources as well as the quadratic-Gaussian case.
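As a hedged illustration of the stated first-order behavior (the symbols R(n, D, ε) for the minimal rate at block length n, distortion threshold D, and excess-distortion probability ε, and V(D) for the excess-distortion dispersion, are introduced here only for concreteness and need not match the paper's notation):

R(n, D, \varepsilon) \approx R(D) + \sqrt{V(D)/n}\, Q^{-1}(\varepsilon),

where R(D) is the rate-distortion function and Q^{-1} is the inverse Gaussian tail function; the second term is the excess rate described above, decaying as the inverse square root of the block length.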
In this work we investigate the behavior of the distortion threshold that can be guaranteed in joint source-channel coding, to within a prescribed excess-distortion probability. We show that the gap between this threshold and the optimal average distortion is governed by a constant that we call the joint source-channel dispersion. This constant is easy to compute, since it is simply the sum of the previously derived source and channel dispersions. The resulting performance is shown to be better than that of any separation-based scheme. The proof uses unequal error protection (UEP) channel coding, so we also evaluate the dispersion of that setting.
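A schematic form of this result, sketched under the assumption of one channel use per source sample (the symbols D_n(ε) for the guaranteed distortion threshold at block length n, V_S and V_C for the source and channel dispersions, C for channel capacity, and R(·) for the rate-distortion function are introduced here for illustration only):

C - R(D_n(\varepsilon)) \approx \sqrt{(V_S + V_C)/n}\; Q^{-1}(\varepsilon),

so the joint source-channel dispersion V_S + V_C governs how quickly D_n(ε) approaches the optimal distortion D* defined by R(D*) = C.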
Abstract-The "water-filling" solution for the quadratic ratedistortion function of a stationary Gaussian source is given in terms of its power spectrum. This formula naturally lends itself to a frequency domain "test-channel" realization. We provide an alternative time-domain realization for the rate-distortion function, based on linear prediction. The predictive test-channel has some interesting implications, including the optimality at all distortion levels of pre/post filtered vector-quantized differential pulse code modulation (DPCM), and a duality relationship with decisionfeedback equalization (DFE) for inter-symbol interference (ISI) channels.Keywords: Test channel, water-filling, pre/post-filtering, DPCM, Shannon lower bound, ECDQ, directed-information, equalization, MMSE estimation, decision feedback. I. INTRODUCTIONThe water-filling solution for the quadratic rate-distortion function R(D) of a stationary Gaussian source is given in terms of the spectrum of the source. Similarly, the capacity C of a power-constrained ISI channel with Gaussian noise is given by a water-filling solution relative to the effective noise spectrum. Both these formulas amount to limiting values of mutual-information between vectors in the frequency domain. In contrast, linear prediction along the time domain can translate these vector mutual-information quantities into scalar ones. Indeed, for capacity, Cioffi et al [4] showed that C is equal to the scalar mutual-information over a slicer embedded in a decision-feedback noise-prediction loop.We show that a parallel result holds for the rate-distortion function: R(D) is equal to the scalar mutual-information over an additive white Gaussian noise (AWGN) channel embedded in a source prediction loop, as shown in Figure 1. This result implies that R(D) can essentially be realized in a sequential manner (as will be clarified later), and it joins other observations regarding the role of minimum mean-square error (MMSE) estimation in successive encoding and decoding of Gaussian channels and sources [7], [6], [3].
The "water-filling" solution for the quadratic ratedistortion function of a stationary Gaussian source is given in terms of its power spectrum. This formula naturally lends itself to a frequency domain "test-channel" realization. We provide an alternative time-domain realization for the rate-distortion function, based on linear prediction. This solution has some interesting implications, including the optimality at all distortion levels of pre/post filtered vector-quantized differential pulse code modulation (DPCM), and a duality relationship with decisionfeedback equalization (DFE) for inter-symbol interference (ISI) channels.