“…Of the many that exist, the two most common LDPC decoding algorithms are the Sum-Product Algorithm (SPA) [24] and the Min-Sum Algorithm (MSA) [25].…”
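The Min-Sum check-node update referred to above can be sketched briefly. This is a minimal, illustrative Python version (not taken from any of the cited designs): for each edge, the outgoing message takes the product of the signs of all *other* incoming LLRs and the minimum of their magnitudes, which is the simplification the MSA makes relative to the SPA's hyperbolic-tangent rule.

```python
def msa_check_node(llrs):
    """Min-Sum check-node update: for edge i, the outgoing message is
    the product of the signs of all other incoming LLRs times the
    minimum of their magnitudes."""
    n = len(llrs)
    out = []
    for i in range(n):
        others = [llrs[j] for j in range(n) if j != i]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * min(abs(v) for v in others))
    return out
```

In hardware, the per-edge minimum is usually obtained from just two values (the overall minimum and the second minimum), which is what motivates the two-minima search discussed later on this page.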
Section: Check Node Calculations
“…Conversely, the dark points in the top-left represent the fully parallel decoders of [67] and [68], which achieve a very high processing throughput by using few bits, few iterations, a large degree of parallelism and operate on the basis of the MSA [25]. The fact that the MSA can facilitate a higher processing throughput than more complicated alternatives such as the SPA [24] is also demonstrated by comparing the results of [75] and [76], which present two very similar designs that vary in algorithm.…”
Section: B. Relationships Between Parameters and Each Characteristic
Abstract-Low-Density Parity Check (LDPC) error correction decoders have become popular in communications systems, as a benefit of their strong error correction performance and their suitability to parallel hardware implementation. A great deal of research effort has been invested into LDPC decoder designs that exploit the flexibility, the high processing speed and the parallelism of Field-Programmable Gate Array (FPGA) devices. FPGAs are ideal for design prototyping and for the manufacturing of small-production-run devices, where their insystem programmability makes them far more cost-effective than Application-Specific Integrated Circuits (ASICs). However, the FPGA-based LDPC decoder designs published in the open literature vary greatly in terms of design choices and performance criteria, making them a challenge to compare. This paper explores the key factors involved in FPGA-based LDPC decoder design and presents an extensive review of the current literature. In-depth comparisons are drawn amongst 140 published designs (both academic and industrial) and the associated performance trade-offs are characterised, discussed and illustrated. Seven key performance characteristics are described, namely their processing throughput, processing latency, hardware resource requirements, error correction capability, processing energy efficiency, bandwidth efficiency and flexibility. We offer recommendations that will facilitate fairer comparisons of future designs, as well as opportunities for improving the design of FPGA-based LDPC decoders.
“…The hardware cost is affordable for the bit-serial check-node architectures presented in [6] and [7], but it is not acceptable for parallel check-node operations, where this search must be completed in a single clock cycle. To reduce the complexity, the svwMS algorithm divides the input messages into two groups and compares the minimum values of these groups [9]. If these values are equal, the second minimum is considered equal to min1.…”
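The two-group simplification described in the quote above can be sketched as follows. This is a hypothetical illustration of the svwMS-style approximation, with an assumed even split of the inputs (the actual grouping in [9] may differ): each group yields its own minimum, the smaller of the two serves as min1, and the larger stands in for the second minimum.

```python
def approx_two_minima(mags):
    """Sketch of the two-group approximation: split the magnitudes
    into two halves, take each half's minimum, and compare.  The
    smaller becomes min1; the larger stands in for min2.  When the
    two group minima are equal, min2 simply equals min1."""
    half = len(mags) // 2
    m_a = min(mags[:half])
    m_b = min(mags[half:])
    return min(m_a, m_b), max(m_a, m_b)
```

Note the trade-off: if both of the two smallest values fall in the same group, the returned min2 overestimates the true second minimum, which is exactly the approximation error the error-correction results in [9] account for.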
Section: B. Area Complexity and Speed Performance
“…This approach has been used by Zhang et al. to design a flexible multi-gigabit shift-LDPC decoder [8]. Angarita et al. analysed the behaviour of the two minima in each iteration of the MS decoding algorithm [9]. Motivated by the observation that these values and their difference increase with every new iteration, they proposed the use of variable, iteration-dependent correction factors.…”
Section: Introduction
“…The introduced variable-weight algorithm (vwMS) exhibits better error-correction performance than the smMS and mfMS algorithms. Moreover, the authors of [9] proposed a simplified vwMS algorithm, called svwMS, that reduces the high computational cost required to determine whether more than one input shares the same minimum value.…”
This paper introduces algorithms and the corresponding circuits that identify minimum values from among a set of incoming messages. The problem of finding the two minima in a set of messages is approximated by the different problem of finding the minimum of all messages in the set and the second minimum among a subset of the messages. This approximation is here shown to be suitable for hardware LDPC decoders that implement the Min-Sum decoding algorithm and its variations. The introduced approximation simplifies the operation performed in the check-node processor and leads to hardware reduction. The proposed schemes outperform other state-of-the-art simplified MS architectures, approaching the error-corrective performance of the NMS decoding algorithm.
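For reference against the approximation the abstract describes, the exact two-minima search it replaces can be written as a single pass over the incoming message magnitudes. This baseline sketch is illustrative rather than taken from the paper's circuits:

```python
def exact_two_minima(mags):
    """Exact single-pass search for the smallest (min1) and
    second-smallest (min2) magnitudes; this is the full comparison
    network that the approximate schemes simplify in hardware."""
    min1 = min2 = float("inf")
    for v in mags:
        if v < min1:
            min1, min2 = v, min1
        elif v < min2:
            min2 = v
    return min1, min2
```

In a fully parallel check node, this sequential scan corresponds to a comparator tree whose area grows with the check-node degree, which is the hardware cost the approximate schemes aim to reduce.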
Summary
Coding theory is a celebrated field of research that has spawned numerous important solutions to the hard problems of reliable and secure data communication. Recent advancements in error control coding have seen a huge surge in the use of low-density parity-check (LDPC) code-based decoding algorithms to address issues of reliable data transmission and reception. To date, extensive research effort has consistently been devoted to LDPC codes, focusing on both algorithm-driven and hardware-realisation-based approaches. The main intention of this work is to provide an extensive, systematic account of recent advancements in LDPC decoding algorithms. In addition, a thorough performance evaluation and analysis of several notable LDPC decoding techniques is presented. Finally, conclusions are drawn by summarising the key research findings, interesting open problems, current challenges and broader perspectives for future directions of research.