The focus of this letter is the oldest class of codes that can approach the Shannon limit quite closely, i.e., low-density parity-check (LDPC) codes, and two mathematical tools that can make their design easier under appropriate assumptions. In particular, we present a simple algorithmic method to estimate the threshold for regular and irregular LDPC codes on memoryless binary-input continuous-output AWGN channels with sum-product decoding; and, to determine how close the obtained thresholds are to the theoretical maximum, i.e., to the Shannon limit, we give a simple and invertible expression of the AWGN channel capacity in the binary-input, soft-output case. For these codes, the threshold is defined as the maximum noise level such that an arbitrarily small bit-error probability can be achieved as the block length tends to infinity. We assume a Gaussian approximation for message densities under density evolution, a widely used simplification of the decoding algorithm.
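The invertible capacity expression the abstract refers to is not quoted here, but the quantity it captures has a standard integral form: for ±1 (BPSK) signalling over an AWGN channel with noise standard deviation σ, the binary-input soft-output capacity is C(σ) = 1 − E[log₂(1 + e^{−2Y/σ²})], with Y ~ N(1, σ²). A minimal numerical sketch of that integral (the function name and quadrature settings are our own choices, not the letter's expression):

```python
import numpy as np

def biawgn_capacity(sigma, n=20001):
    """Capacity (bits per channel use) of the binary-input AWGN channel
    with +/-1 signalling and noise standard deviation sigma, evaluated as
    C = 1 - E[log2(1 + exp(-2Y/sigma^2))], Y ~ N(1, sigma^2),
    by direct numerical integration on a uniform grid."""
    y = np.linspace(1.0 - 12.0 * sigma, 1.0 + 12.0 * sigma, n)
    pdf = np.exp(-(y - 1.0) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))
    # log2(1 + e^x) computed stably via logaddexp to avoid overflow
    integrand = pdf * np.logaddexp(0.0, -2.0 * y / sigma ** 2) / np.log(2.0)
    return 1.0 - np.sum(integrand) * (y[1] - y[0])
```

Since C(σ) is strictly decreasing in σ, the noise threshold of a rate-R code can be compared to the Shannon limit by solving C(σ) = R numerically, e.g., by bisection on σ.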
The accuracy of forward models for electroencephalography (EEG) partly depends on head tissue geometry and strongly affects the reliability of the source reconstruction process, but it is not yet clear which brain regions are most sensitive to the choice of model geometry. In this paper we compare different spherical and realistic head-modeling techniques in estimating EEG forward solutions from current dipole sources distributed on a standard cortical space reconstructed from Montreal Neurological Institute (MNI) MRI data. Computer simulations are presented for three different four-shell head models: two with realistic geometry, either surface-based (BEM) or volume-based (FDM), and the corresponding sensor-fitted spherical model. Point Spread Function (PSF) and Lead Field (LF) cross-correlation analyses were performed for 26 symmetric dipole sources to quantitatively assess the models' accuracy in EEG source reconstruction. Realistic geometry turns out to be a relevant factor of improvement, particularly for sources placed in the temporal or occipital cortex.
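The lead-field cross-correlation analysis mentioned above compares, source by source, the scalp topographies predicted by two head models. A minimal sketch of that metric, assuming each lead field is stored as an (n_sensors × n_sources) matrix (the function name and shapes are our assumptions, not the authors' code):

```python
import numpy as np

def leadfield_cc(L_a, L_b):
    """Column-wise Pearson cross-correlation between two lead-field
    matrices of shape (n_sensors, n_sources): for each dipole source,
    correlate the sensor-space topographies predicted by the two models.
    Returns one correlation value per source, in [-1, 1]."""
    A = L_a - L_a.mean(axis=0)          # remove per-source mean (average reference)
    B = L_b - L_b.mean(axis=0)
    num = (A * B).sum(axis=0)
    den = np.sqrt((A ** 2).sum(axis=0) * (B ** 2).sum(axis=0))
    return num / den
```

Values near 1 indicate that the two geometries predict nearly identical topographies for that source; low values flag regions (e.g., temporal or occipital cortex) where the model geometry matters.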
EEG-based source localization techniques use scalp-potential data to estimate the location of underlying neural activity. EEG source reconstruction requires the assumption of a source model and of a conductive head model. Brain lesions can present conductivity values dramatically different from those of the surrounding normal tissues and have to be included in head models for accurate neural source reconstruction. It is therefore necessary to analyze subjects' anatomic images (using MRI or computed tomography) to identify the lesion type and to assign the appropriate conductivity value. Source localization accuracy may be influenced by uncertainties in tissue conductivity assignment during head model construction. The authors present a sensitivity study quantifying the effect of uncertainty in brain lesion conductivity assignment on EEG dipole source localization. They adopted an eccentric-spheres head model in which an eccentric bubble approximated the effects of actual brain lesions. After simulating EEG signal measurement in 64 different pathologic situations, an inverse dipole fitting procedure was carried out, assuming an incorrect lesion conductivity assignment ranging from half to twice the real value. Incorrect lesion conductivity assignment led to markedly wrong source reconstruction for highly conductive lesions like liquid-filled ones (localization errors as large as 1.7 cm). Conversely, low sensitivity to uncertainties in conductivity assignment was found for lesions with low conductivity, like calcified tumors. The authors propose a method based on residual error analysis to improve the lesion conductivity estimate. This procedure can identify lesion tissue conductivity with only a few percent error and guarantees source localization errors of less than 5 mm.
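The residual-error analysis proposed above amounts to a one-dimensional search: scan candidate lesion conductivities, recompute the forward solution for each, and keep the value whose prediction best matches the measured potentials. A toy sketch of that search logic, with a deliberately hypothetical stand-in for the forward model (the real study uses an eccentric-spheres model; `toy_forward` and all names below are illustrative only):

```python
import numpy as np

def toy_forward(conductivity, n_sensors=64):
    """Hypothetical stand-in for an EEG forward model: maps a lesion
    conductivity value to a vector of simulated scalp potentials.
    A real implementation would solve the eccentric-spheres model."""
    rng = np.random.default_rng(0)          # fixed topography pattern
    basis = rng.standard_normal(n_sensors)
    return basis / (1.0 + conductivity)     # potentials shrink as conductivity grows

def estimate_conductivity(measured, candidates):
    """Residual-error analysis: return the candidate conductivity whose
    forward solution minimizes the squared residual to the measurement."""
    residuals = [np.sum((toy_forward(c) - measured) ** 2) for c in candidates]
    return candidates[int(np.argmin(residuals))]
```

With noiseless data the residual vanishes at the true conductivity, so the grid search recovers it exactly; with measurement noise the residual curve flattens, which is consistent with the few-percent error the authors report.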
Since irregular low-density parity-check (LDPC) codes are known to perform better than regular ones, and to exhibit, like them, the so-called "threshold phenomenon", this letter investigates a low-complexity upper bound on belief-propagation decoding thresholds for this class of codes on memoryless BI-AWGN (Binary-Input Additive White Gaussian Noise) channels, with sum-product decoding. We use a simplified analysis of the belief-propagation decoding algorithm, i.e., we consider a Gaussian approximation for message densities under density evolution, and a simple algorithmic method, defined recently, to estimate the decoding thresholds for regular and irregular LDPC codes.

Introduction: As first noticed by Gallager in his introductory work on regular LDPC codes [1], these codes exhibit the so-called "threshold phenomenon": an upper bound for the channel noise is given by the noise threshold, such that, if the channel noise is kept below this threshold, the probability of lost information can be made as small as desired. It was later shown in [2] that irregular LDPC codes perform better than regular ones and exhibit this phenomenon, too. LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical maximum (the Shannon limit) for a symmetric memoryless channel. Thus, the problem of an easy evaluation of the threshold, and, more generally, of the performance of belief-propagation decoding (see, e.g., [3] and [4]), is important to allow the design of capacity-approaching codes based on noise-threshold maximization. Maximum-likelihood decoding of LDPC codes is in general not feasible [3]. Instead, Gallager proposed an iterative soft decoding algorithm, also called belief propagation [5].
Gallager also noted that, for any given channel conditions, it is possible to evaluate the performance of belief propagation by following the evolution of the distribution of the messages. This idea was extended in [6], where it was shown how to apply density evolution efficiently. One difficulty encountered when applying density evolution is the continuous nature of the messages, which makes them hard to analyze. As an alternative, in [7] a Gaussian approximation for the message distribution was proposed, reducing the evolution of the infinite-dimensional density space to the evolution of a single parameter. In this way, the mean value of a generic check-node output message at the l-th iteration is simply described as a function of the check-node output message mean value at the (l − 1)-th iteration, thus obtaining a recurrent sequence. With this simplified description, the threshold can be calculated as the last value such that the recurrent sequence converges, but no mathematical method was provided in [7] to determine it. In [8] a mathematical method was presented to allow noise-threshold evaluation using the quadratic degeneracy theory, thus transforming a recurrence-relation convergence problem into a problem of ma...
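The one-parameter recursion described above can be sketched numerically for a regular (dv, dc) ensemble. In the Gaussian-approximation analysis of [7], the check-node output mean evolves as m_u^(l) = φ⁻¹(1 − [1 − φ(m_u0 + (dv − 1) m_u^(l−1))]^(dc−1)), with channel message mean m_u0 = 2/σ² and φ(m) = 1 − E[tanh(U/2)], U ~ N(m, 2m). The sketch below evaluates φ by direct numerical integration (the literature typically uses closed-form approximations of φ) and estimates the threshold by bisection over σ; the grid sizes, convergence criteria, and function names are our own choices:

```python
import numpy as np

def phi(m, n=601):
    """phi(m) = 1 - E[tanh(U/2)], U ~ N(m, 2m), by numerical integration.
    phi is strictly decreasing, with phi(0) = 1."""
    if m <= 0:
        return 1.0
    s = np.sqrt(2.0 * m)
    u = np.linspace(m - 10.0 * s, m + 10.0 * s, n)
    pdf = np.exp(-(u - m) ** 2 / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
    return 1.0 - np.sum(np.tanh(u / 2.0) * pdf) * (u[1] - u[0])

def phi_inv(y, lo=1e-6, hi=100.0, iters=50):
    """Invert phi by bisection (safe because phi is monotone)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if phi(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def converges(sigma, dv=3, dc=6, iters=200, target=30.0):
    """Track the check-node output mean m_u across decoding iterations;
    declare convergence once it grows past a large target value."""
    m_u0 = 2.0 / sigma ** 2
    m_u = 0.0
    for _ in range(iters):
        m_v = m_u0 + (dv - 1) * m_u                         # variable-node update
        m_u = phi_inv(1.0 - (1.0 - phi(m_v)) ** (dc - 1))   # check-node update
        if m_u > target:
            return True
    return False

def threshold(dv=3, dc=6, lo=0.5, hi=1.5, steps=14):
    """Largest noise level sigma for which the recursion still converges."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the regular (3,6) ensemble this bisection should land near σ ≈ 0.87, close to the Gaussian-approximation threshold reported in the literature (slightly below the exact density-evolution value).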
Recently, a powerful class of rate-compatible serially concatenated convolutional codes (SCCCs) has been proposed, based on minimizing analytical upper bounds on the error probability in the error-floor region. Here this class of codes is further investigated by combining analytical upper bounds with extrinsic information transfer (EXIT) chart analysis. Following this approach, we construct a family of rate-compatible SCCCs with good performance in both the error-floor and waterfall regions over a broad range of code rates.