Guessing Random Additive Noise Decoding (GRAND) can, unusually, decode any forward error correction block code. The original algorithm assumed that the decoder received only hard-decision demodulated bits to inform its decoding. As the incorporation of soft information is known to improve decoding precision, here we introduce Ordered Reliability Bits GRAND (ORBGRAND), which, for a binary block code of length n, avails of no more than log2(n) bits of code-book-independent quantized soft detection information per received bit to determine an accurate decoding. ORBGRAND is shown to provide better block error rate performance than CA-SCL, a state-of-the-art decoder for CA-Polar codes, with low complexity. Random Linear Codes of the same rate, decoded with ORBGRAND, are shown to have comparable block-error and complexity performance.
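The core of ORBGRAND is the order in which putative error patterns are queried: bits are ranked from least to most reliable, a pattern's "logistic weight" is the sum of the ranks of the bits it flips, and patterns are tested in increasing logistic-weight order. The following is a minimal illustrative sketch of that ordering only, not the paper's efficient pattern generator; the function name and the toy reliability values are assumptions made for illustration.

```python
import itertools
import numpy as np

def orbgrand_order(reliabilities):
    """Return putative error patterns (as tuples of flipped-bit indices)
    in increasing logistic-weight order. Illustrative brute-force sketch:
    it enumerates all 2^n patterns, so it is only meant for tiny n."""
    n = len(reliabilities)
    # Rank 1 = least reliable bit, rank n = most reliable bit.
    order = np.argsort(reliabilities)
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(1, n + 1)
    patterns = []
    for w in range(n + 1):
        for support in itertools.combinations(range(n), w):
            logistic_weight = sum(rank[i] for i in support)
            patterns.append((logistic_weight, support))
    patterns.sort(key=lambda t: t[0])  # most likely patterns first
    return [s for _, s in patterns]

# Hypothetical per-bit reliabilities for a length-4 code: the empty
# pattern comes first, then flipping only the least reliable bit.
rels = [0.9, 0.2, 0.7, 0.4]
seq = orbgrand_order(rels)
```

Because the ordering depends only on the reliability ranks, not on the code, the same query schedule works for any code-book, which is what makes the soft information code-book-independent.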
We introduce a new algorithm for Maximum Likelihood (ML) decoding for channels with memory. The algorithm is based on the principle that the receiver rank orders noise sequences from most likely to least likely. Subtracting noise from the received signal in that order, the first instance that results in an element of the code-book is the ML decoding. In contrast to traditional approaches, this novel scheme has the desirable property that it becomes more efficient as the code-book rate increases. We establish that the algorithm is capacity achieving for randomly selected code-books. When the code-book rate is less than capacity, we identify asymptotic error exponents as the block length becomes large. When the code-book rate is beyond capacity, we identify asymptotic success exponents. We determine properties of the complexity of the scheme in terms of the number of computations the receiver must perform per block symbol. Worked examples are presented for binary memoryless and Markovian noise. These demonstrate that block-lengths that offer a good complexity-rate tradeoff are typically smaller than the reciprocal of the bit error rate.
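The guessing procedure described above can be sketched for the hard-decision, memoryless-noise case: on a BSC, noise sequences are most likely in order of increasing Hamming weight, so the decoder queries them in that order and stops at the first guess that lands in the code-book. This is a toy sketch under those assumptions; the function interface and the (7,4) Hamming parity-check matrix are chosen here for illustration.

```python
import itertools
import numpy as np

def grand_decode(y, H, max_queries=10_000):
    """Hard-decision GRAND sketch for a binary linear code with
    parity-check matrix H. Queries putative noise sequences from most
    to least likely (increasing Hamming weight, as on a BSC); the first
    guess yielding a zero syndrome is the ML decoding."""
    n = len(y)
    queries = 0
    for w in range(n + 1):                        # Hamming weight 0, 1, 2, ...
        for support in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            c = y ^ e                             # subtract (XOR) the noise guess
            queries += 1
            if not (H @ c % 2).any():             # code-book membership test
                return c, queries
            if queries >= max_queries:
                return None, queries

# Toy example: (7,4) Hamming code parity-check matrix (illustrative choice).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
y = np.array([0, 1, 0, 1, 0, 0, 1])               # codeword with bit 0 flipped
c_hat, q = grand_decode(y, H)
```

Note the property claimed in the abstract: the work is driven by the noise, not the code, so higher-rate code-books (more codewords, hence earlier membership hits) make the guessing loop terminate sooner on average.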
The contribution of this paper is twofold. In the first part, we present a refinement of the Rényi Entropy Power Inequality (EPI) recently obtained in [11]. The proof largely follows the approach in [18] of employing Young's convolution inequalities with sharp constants. In the second part, we study the reversibility of the Rényi EPI, and confirm a conjecture of [5, 24] in two cases. Connections with various p-th mean bodies in convex geometry are also explored.