Abstract-The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum-product recursion on the joint source/channel trellis. This method is extended to compute upper and lower bounds on the information rate of very general channels with memory by means of finite-state approximations. Further upper and lower bounds can be computed by reduced-state methods.
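The sampling-plus-forward-recursion estimator described above can be sketched for a toy case. The snippet below is a minimal illustration, not the paper's general construction: it assumes a hypothetical two-state dicode channel y_k = x_k - x_{k-1} + w_k with i.u.d. binary inputs and Gaussian noise, and the block length and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 20_000, 1.0   # block length and noise std (illustrative assumptions)

# Sample an i.i.d. +/-1 input sequence through a dicode channel y_k = x_k - x_{k-1} + w_k.
x = rng.choice([-1.0, 1.0], size=n)
xprev = np.concatenate(([1.0], x[:-1]))
y = x - xprev + sigma * rng.standard_normal(n)

# Forward sum-product recursion on the 2-state trellis (state = previous input):
# alpha[s] is proportional to p(y_1..y_k, S_k = s); per-step normalization
# accumulates log p(y_1..y_n) without numerical underflow.
states = np.array([-1.0, 1.0])
alpha = np.array([0.5, 0.5])
log2_p = 0.0
for k in range(n):
    new = np.empty(2)
    for j, s_new in enumerate(states):          # new state = current input x_k
        means = s_new - states                  # output means given each old state
        lik = np.exp(-(y[k] - means) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi) / sigma
        new[j] = 0.5 * (alpha @ lik)            # 0.5 = P(x_k = s_new)
    scale = new.sum()
    log2_p += np.log2(scale)
    alpha = new / scale

h_y = -log2_p / n                                  # sampled estimate of h(Y), bits/use
h_n = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)   # h(Y|X): Gaussian noise entropy
rate = h_y - h_n                                   # estimated i.u.d. information rate
print(f"estimated information rate ~ {rate:.3f} bits per channel use")
```

The same recursion applies to any finite-state model; only the branch likelihoods change.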
Abstract-For a stationary additive Gaussian-noise channel with a rational noise power spectrum of finite order L, we derive two new results for the feedback capacity under an average channel input power constraint. First, we show that a very simple feedback-dependent Gauss-Markov source achieves the feedback capacity, and that Kalman-Bucy filtering is optimal for processing the feedback. Based on these results, we develop a new method for optimizing the channel inputs to achieve the Cover-Pombra block-length-n feedback capacity, using a dynamic programming approach that decomposes the computation into n sequentially identical stages, each involving the optimization of O(L^2) variables. Second, we derive the explicit maximal information rate for stationary feedback-dependent sources. In general, evaluating the maximal information rate for stationary sources requires solving only a few equations by simple nonlinear programming. For first-order autoregressive and/or moving average (ARMA) noise channels, this optimization admits a closed-form maximal information rate formula. The maximal information rate for stationary sources is a lower bound on the feedback capacity, and it equals the feedback capacity if the long-standing conjecture that stationary sources achieve the feedback capacity holds.
Index Terms-channel capacity, directed information, dynamic programming, feedback capacity, Gauss-Markov source, information rate, intersymbol interference, Kalman-Bucy filter, linear Gaussian noise channel, noise whitening filter.
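To illustrate the filtering structure the abstract refers to, here is a minimal scalar Kalman filter tracking a first-order Gauss-Markov process observed in white Gaussian noise. This is a generic textbook recursion, not the paper's feedback coding scheme; the model coefficients a, q, r are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r, n = 0.9, 1.0, 1.0, 500   # illustrative model parameters

# First-order Gauss-Markov state z_k observed in white Gaussian noise.
z = np.empty(n); y = np.empty(n)
z[0] = rng.standard_normal()
for k in range(n):
    y[k] = z[k] + np.sqrt(r) * rng.standard_normal()
    if k + 1 < n:
        z[k + 1] = a * z[k] + np.sqrt(q) * rng.standard_normal()

# Scalar Kalman filter: recursively track E[z_k | y_1..y_k] and its error variance.
zhat, p = 0.0, 1.0
filtered_var = []
for k in range(n):
    gain = p / (p + r)                 # measurement update
    zhat += gain * (y[k] - zhat)
    p = (1.0 - gain) * p
    filtered_var.append(p)
    zhat = a * zhat                    # time update
    p = a * a * p + q
```

The filtered error variance converges to the steady-state solution of the scalar Riccati equation, which is the kind of fixed-point structure the stationary-source analysis exploits.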
Abstract-We study the limits of performance of Gallager codes (low-density parity-check (LDPC) codes) over binary linear intersymbol interference (ISI) channels with additive white Gaussian noise (AWGN). Using the graph representations of the channel, the code, and the sum-product message-passing detector/decoder, we prove two error concentration theorems. Our proofs expand on previous work by handling complications introduced by the channel memory. We circumvent these problems by considering not just linear Gallager codes but also their cosets, and by distinguishing between different types of message flow neighborhoods depending on the actual transmitted symbols. We compute the noise tolerance threshold using a suitably developed density evolution algorithm and verify, by simulation, that the thresholds represent accurate predictions of the performance of the iterative sum-product algorithm for finite (but large) block lengths. We also demonstrate that for high rates, the thresholds are very close to the theoretical limit of performance for Gallager codes over ISI channels. If C denotes the capacity of a binary ISI channel and C_iid denotes the maximal achievable mutual information rate when the channel inputs are independent and identically distributed (i.i.d.) binary random variables (C_iid ≤ C), we prove that the maximum information rate achievable by the sum-product decoder of a Gallager (coset) code is upper-bounded by C_iid. The last topic investigated is the performance limit of the decoder if the trellis portion of the sum-product algorithm is executed only once; this demonstrates the potential for trading off the computational requirements and the performance of the decoder.
Index Terms-Bahl-Cocke-Jelinek-Raviv (BCJR)-once bound, channel capacity, density evolution, Gallager codes, independent and identically distributed (i.i.d.) capacity, intersymbol interference (ISI) channel, low-density parity-check (LDPC) codes, sum-product algorithm, turbo equalization.
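Density evolution is easiest to see in its simplest setting. The sketch below is not the paper's ISI algorithm (which must track the joint trellis/graph messages); it is the standard one-dimensional recursion for a regular (3,6) LDPC code on the memoryless binary erasure channel, with a bisection search for the noise tolerance threshold.

```python
def de_erasure(eps, dv=3, dc=6, iters=5000):
    """One-dimensional density evolution for a regular (dv, dc) LDPC code on the BEC.

    Returns the variable-to-check erasure probability after `iters` iterations.
    """
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

# Bisect for the largest channel erasure probability that density evolution
# drives to zero: this is the decoding threshold.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if de_erasure(mid) < 1e-6:
        lo = mid
    else:
        hi = mid
threshold = lo
print(f"(3,6)-regular LDPC threshold on the BEC ~ {threshold:.4f}")
```

For channels with memory, the scalar x must be replaced by message densities, and the check/variable updates are interleaved with a windowed BCJR step on the channel trellis.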
Abstract-The paper considers the inversion of full matrices whose inverses are banded. We derive a nested inversion algorithm for such matrices. Applied to a tridiagonal matrix, the algorithm provides its explicit inverse as an element-wise product (Hadamard product) of three matrices. When related to Gauss-Markov random processes (GMrp), this result provides a closed-form factored expression for the covariance matrix of a first-order GMrp. This factored form leads to the interpretation of a first-order GMrp as the product of three independent processes: a forward independent-increments process, a backward independent-increments process, and a variance-stationary process. We explore the nonuniqueness of the factorization and design it so that the forward and backward factor processes have minimum energy. We then consider the issue of approximating general nonstationary Gaussian processes by Gauss-Markov processes under two optimality criteria: the Kullback-Leibler distance and maximum entropy. The problem reduces to approximating general covariances by covariance matrices whose inverses are banded. Our inversion result is an efficient algorithmic solution to this problem. We evaluate the information loss between the original process and its Gauss-Markov approximation.
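The banded-inverse property underlying this abstract can be checked numerically. The snippet below, a minimal sketch with arbitrary size and correlation coefficient, builds the covariance of a stationary first-order Gauss-Markov process (R[i,j] = ρ^|i-j|) and verifies that its inverse is tridiagonal, i.e., banded.

```python
import numpy as np

n, rho = 8, 0.7   # illustrative size and correlation coefficient

# Covariance of a stationary first-order Gauss-Markov process: R[i, j] = rho^|i-j|.
idx = np.arange(n)
R = rho ** np.abs(idx[:, None] - idx[None, :])

# Its inverse (the information matrix) is tridiagonal: all entries more than
# one position off the diagonal vanish up to floating-point roundoff.
K = np.linalg.inv(R)
outside_band = np.abs(idx[:, None] - idx[None, :]) > 1
print("max |entry| outside the tridiagonal band:", np.abs(K[outside_band]).max())
```

Conversely, approximating a general covariance by one whose inverse is banded is exactly the Gauss-Markov approximation problem the abstract describes.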
Abstract-In this work we present a low-complexity implementation of Chase-type decoding of Reed-Solomon codes. Specifically, we first use the soft information available at the channel output to construct a test set of 2^η vectors, identical in all except the η ≪ n least reliable coordinate positions. We then give an interpolation procedure that constructs a set of 2^η bivariate polynomials, with the roots of each specified by its corresponding test vector. Here, similarity among test vectors is exploited to share much of the required computation. Finally, we obtain the candidate message from the single z-linear factor of each bivariate polynomial. Although we provide an expression for the direct computation of each candidate message, the complexity of repeating this computation for each interpolation polynomial is prohibitive. We therefore also present a reduced-complexity factorization (RCF) method that selects a single polynomial which, with high probability, contains the correctly decoded message in its z-linear factor. Although suboptimal, the performance loss of RCF decreases rapidly with increasing code length. We provide extensive simulation results showing that a significant performance gain over traditional hard-decision decoding (as implemented with the Berlekamp-Massey algorithm) is achievable at comparable computational complexity.
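The test-set construction step can be sketched concretely. The snippet below is a simplified binary illustration, not the paper's symbol-level Reed-Solomon procedure: it assumes hypothetical soft outputs whose magnitudes act as reliabilities, and the code length n and parameter η are arbitrary.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, eta = 15, 3   # illustrative code length and number of low-reliability positions

# Hypothetical soft channel outputs: |value| plays the role of reliability.
llr = rng.standard_normal(n)
hard = (llr < 0).astype(int)                  # hard-decision vector
weak = np.argsort(np.abs(llr))[:eta]          # the eta least reliable coordinates

# Enumerate all 2^eta hypotheses on the weak positions; every test vector
# agrees with the hard decision on the remaining n - eta coordinates.
test_set = []
for flips in itertools.product([0, 1], repeat=eta):
    v = hard.copy()
    v[weak] ^= np.asarray(flips)
    test_set.append(v)

print(f"built {len(test_set)} test vectors (2^{eta})")
```

Because neighboring test vectors differ in only a few positions, an interpolation-based decoder can share most of the work across the 2^η candidates, which is the source of the complexity savings the abstract claims.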