Feldman et al. (IEEE Trans. Inform. Theory, Mar. 2005) showed that linear programming (LP) can be used to decode linear error-correcting codes. The bit-error-rate performance of LP decoding is comparable to that of state-of-the-art belief propagation (BP) decoders, but LP decoding comes with significantly stronger theoretical guarantees. However, when implemented with standard LP solvers, LP decoding does not easily scale to the block lengths of modern error-correcting codes. In this paper we draw on decomposition methods from optimization theory to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a nearly linear-time algorithm for two-norm projection onto the parity polytope. This allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error-correcting codes efficiently.
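To make the LP decoding objective concrete, here is a minimal sketch for a code with a single parity check of length 3. The parity polytope is the convex hull of the even-weight binary vectors, and since the objective is linear, the optimum lies at a vertex; this toy example (function name and LLR values are our own, not from the paper) simply enumerates the vertices rather than running an LP solver or the paper's projection algorithm.

```python
from itertools import product

def lp_decode_single_check(gamma):
    """Minimize gamma . x over the parity polytope PP_n, the convex hull
    of even-weight binary vectors of length n. The linear objective is
    minimized at a vertex, so for this tiny example we enumerate them.

    gamma[i] is the log-likelihood ratio for bit i (negative means the
    channel output suggests bit i is a 1)."""
    n = len(gamma)
    best = None
    for v in product((0, 1), repeat=n):
        if sum(v) % 2 == 0:  # keep only even-weight vertices
            cost = sum(g * x for g, x in zip(gamma, v))
            if best is None or cost < best[0]:
                best = (cost, v)
    return best

# Bits 1 and 2 look like 1s, bit 3 looks like a 0; the decoder keeps
# the tentative decisions because they already satisfy the parity check.
cost, x = lp_decode_single_check([-1.0, -1.0, 2.0])
# x == (1, 1, 0), cost == -2.0
```

For realistic block lengths this enumeration is exponential, which is exactly why the paper's nearly linear-time projection onto the parity polytope matters: it lets each check-node subproblem in a decomposition method be solved efficiently.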
Coded computation is a method to mitigate "stragglers" in distributed computing systems through the use of error-correction coding, and it has lately received significant attention. First used in vector-matrix multiplication, its range of application was later extended to include matrix-matrix multiplication, heterogeneous networks, convolution, and approximate computing. A drawback of previous results is that they completely ignore the work completed by stragglers. While stragglers are slower compute nodes, in many settings the amount of work they complete can be non-negligible. Thus, in this work, we propose a hierarchical coded computation method that exploits the work completed by all compute nodes. We partition each node's computation into layers of sub-computations such that each layer can be treated as a (distinct) erasure channel. We then design different erasure codes for each layer so that all layers have the same failure exponent, and we propose design guidelines to optimize the parameters of such codes. Numerical results show that the proposed scheme improves the expected finishing time by a factor of 1.5 compared to previous work.
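The layered structure can be sketched with a small simulation. In this toy model (which is our own simplified assumption, not the paper's exact setup), each of n workers processes layers in order, layer l is protected by an (n, ks[l]) MDS code so the master decodes it once any ks[l] workers have finished it, and per-layer work times are i.i.d. exponential.

```python
import random

def simulate_finish_time(n, ks, mu=1.0, seed=0):
    """Toy model of hierarchical coded computation: n workers each process
    len(ks) layers sequentially; layer l is decodable at the ks[l]-th
    smallest cumulative finish time for that layer (an (n, ks[l]) MDS
    code tolerates n - ks[l] erasures). Returns the overall finishing
    time, i.e. when every layer is decodable."""
    rng = random.Random(seed)
    num_layers = len(ks)
    # finish[w][l] = cumulative time at which worker w completes layer l
    finish = []
    for _ in range(n):
        t, row = 0.0, []
        for _ in range(num_layers):
            t += rng.expovariate(mu)  # one layer's worth of work
            row.append(t)
        finish.append(row)
    layer_done = [sorted(finish[w][l] for w in range(n))[ks[l] - 1]
                  for l in range(num_layers)]
    return max(layer_done)
```

Lowering ks[l] for later layers (heavier coding where fewer stragglers will have reached) is the kind of trade-off the paper's design guidelines optimize: smaller k means earlier decodability but less useful computation per worker.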
We develop coding strategies for estimation under communication constraints in tree-structured sensor networks. The strategies have a modular and decentralized architecture. This promotes the flexibility, robustness, and scalability that wireless sensor networks need to operate in uncertain, changing, and resource-constrained environments. The strategies are based on a generalization of Wyner-Ziv source coding with decoder side information. We develop solutions for general trees, and illustrate our results in serial (pipeline) and parallel (hub-and-spoke) networks. Additionally, the strategies can be applied to other network information theory problems. They have a successive coding structure that gives an inherently less complex way to attain a number of prior results, as well as some novel results, for the Chief Executive Officer problem, multiterminal source coding, and certain classes of relay channels.
Linear programming (LP) decoding for low-density parity-check (LDPC) codes, proposed by Feldman et al., is shown to have theoretical guarantees in several regimes and is not empirically observed to suffer from an error floor. However, at low signal-to-noise ratios (SNRs), LP decoding is observed to have worse error performance than belief propagation (BP) decoding. In this paper, we seek to improve LP decoding at low SNRs while still achieving good high-SNR performance. We first present a new decoding framework obtained by trying to solve a nonconvex optimization problem using the alternating direction method of multipliers (ADMM). This nonconvex problem is constructed by adding a penalty term to the LP decoding objective. The goal of the penalty term is to make "pseudocodewords", the non-integer vertices of the LP relaxation at which the LP decoder can fail, more costly. We name this decoder class the "ADMM penalized decoder". In our simulation results, the ADMM penalized decoder with ℓ1 and ℓ2 penalties outperforms both BP and LP decoding at all SNRs. For high-SNR regimes where simulation is infeasible, we use an instanton analysis and show that the ADMM penalized decoder has better high-SNR performance than BP decoding. We also develop a reweighted LP decoder using linear approximations to the objective with an ℓ1 penalty. We show that this decoder has an improved theoretical recovery threshold compared to LP decoding. In addition, we show that the empirical gain of the reweighted LP decoder is significant at low SNRs.
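The effect of the penalty term can be illustrated numerically. The sketch below (function name, LLRs, and the penalty weight are illustrative choices, not the paper's parameter settings) evaluates an ℓ1-penalized objective of the form γᵀx − α·Σ|xᵢ − 0.5|, which is most negative at integral xᵢ ∈ {0, 1}, so fractional pseudocodewords become relatively more costly than nearby codewords.

```python
def penalized_cost(gamma, x, alpha):
    """Illustrative l1-penalized decoding objective: the LP cost gamma . x
    plus a concave penalty -alpha * sum |x_i - 0.5|. The penalty is zero
    at the "most fractional" point x_i = 0.5 and most negative at
    integral points, disadvantaging pseudocodewords."""
    lp = sum(g * xi for g, xi in zip(gamma, x))
    pen = -alpha * sum(abs(xi - 0.5) for xi in x)
    return lp + pen

gamma = [-1.0, -1.0, -1.0]       # all bits look like 1s
codeword = (1, 1, 0)             # an integral (even-weight) point
pseudo = (0.5, 0.5, 0.5)         # a fully fractional point
# The penalty leaves the pseudocodeword's cost unchanged but lowers the
# codeword's cost, widening the gap between them.
```

With α = 0 the objective reduces to the plain LP cost; increasing α widens the margin by which integral points beat fractional ones, which is the intuition behind both the ADMM penalized decoder and the reweighted LP decoder described above.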