In recent years, iterative processing techniques with soft-in/soft-out (SISO) components have received considerable attention. Such techniques, based on the so-called turbo principle, are exemplified through turbo decoding, turbo equalization and turbo multiuser detection. In this paper, turbo multiuser detection is applied to a discrete multitone (DMT) very-high-rate digital subscriber line (VDSL) system to combat crosstalk signals and to obtain substantial coding gain. The proposed iterative DMT receiver is shown to achieve an overall 7.0 dB gain over the uncoded optimum receiver at a bit error rate of 10⁻⁷ for a channel with severe intersymbol interference and additive white Gaussian noise and with one dominant crosstalk signal. Impulse noise is detrimental to the proposed scheme but can be overcome through erasure decoding techniques, as is shown by example.
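The turbo principle described above can be illustrated with a toy iterative soft interference canceller for two mutually crosstalking binary signals. This is a minimal sketch, not the paper's actual DMT/VDSL receiver: the coupling coefficient `rho`, noise level `sigma`, and the simple two-user model are all illustrative assumptions, and the coding stage is omitted so only soft-estimate exchange between the two detectors is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

n, rho, sigma = 1000, 0.6, 0.3      # block length, crosstalk coupling, noise std (assumed)
b1 = rng.choice([-1.0, 1.0], n)     # BPSK symbols of user 1
b2 = rng.choice([-1.0, 1.0], n)     # BPSK symbols of user 2

# matched-filter outputs: each observation is corrupted by the other signal
r1 = b1 + rho * b2 + sigma * rng.standard_normal(n)
r2 = b2 + rho * b1 + sigma * rng.standard_normal(n)

L1 = np.zeros(n)  # log-likelihood ratios (soft information) for user 1
L2 = np.zeros(n)

for _ in range(5):                   # turbo iterations: exchange soft estimates
    s2 = np.tanh(L2 / 2)             # soft symbol estimate of the interferer
    L1 = 2 * (r1 - rho * s2) / sigma**2   # cancel interference, update LLR
    s1 = np.tanh(L1 / 2)
    L2 = 2 * (r2 - rho * s1) / sigma**2

bh1 = np.sign(L1)                    # hard decisions after the final iteration
bh2 = np.sign(L2)
```

Each pass subtracts a progressively more reliable soft estimate of the crosstalk before re-deriving the LLRs, which is the same extrinsic-information exchange that drives the full turbo multiuser detector.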
The problem of blind detection in a synchronous code division multiple access (CDMA) system when there is no knowledge of the users' spreading sequences is considered. An expectation maximization (EM)-based algorithm that exploits the finite alphabet (FA) property of the digital communications source is proposed. Simulations indicate that this approach, which makes use of knowledge of the subspace spanned by the signaling multiplex, achieves the Cramér-Rao lower bound (CRB). The issues of subspace estimation and timing acquisition are also considered.
Index Terms: CDMA, code-free demodulation, EM algorithm.
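The core idea of exploiting the finite alphabet via EM can be sketched on a scalar toy model: an unknown gain `h` multiplies BPSK symbols, and EM alternates between soft symbol posteriors (E-step) and a channel re-estimate (M-step). This is only an illustrative single-user sketch under assumed parameters (`h_true`, `sigma`, block length), not the paper's subspace-based multiuser algorithm; the inherent sign ambiguity of blind estimation remains.

```python
import numpy as np

rng = np.random.default_rng(1)

# unknown channel gain h and BPSK symbols b_t in {-1, +1}
h_true, sigma, n = 1.3, 0.5, 2000
b = rng.choice([-1.0, 1.0], n)
y = h_true * b + sigma * rng.standard_normal(n)

h = 0.1  # rough initial guess; blind detection can only recover h up to sign
for _ in range(50):
    # E-step: posterior mean of each +/-1 symbol under the current estimate
    s = np.tanh(h * y / sigma**2)
    # M-step: re-estimate the gain with the soft symbols plugged in
    h = np.mean(y * s)

b_hat = np.sign(h * y)  # symbol decisions under the final estimate
```

The E-step is exact for the binary alphabet (the posterior mean is a `tanh` of the scaled observation), which is precisely the FA property the algorithm exploits; with a richer alphabet or a multiuser mixture the same two steps apply with a larger posterior computation.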
Despite the great potential of edge artificial intelligence (AI), the convergence of edge computing and AI, it requires sufficiently large and diverse datasets and consumes substantial energy for model training on resource-constrained edge devices, hindering the application of edge AI at edge devices. This paper proposes a lead federated neuromorphic learning (LFNL) technique, a decentralized, energy-efficient brain-inspired computing method that enables edge devices to collaboratively train a global model while preserving privacy. Experimental results validate that LFNL substantially reduces data traffic by >3.5× and computational latency by >2.0× compared to centralized learning, with comparable classification accuracy, and significantly outperforms local learning under uneven dataset distribution among edge devices. Meanwhile, LFNL reduces energy consumption by >4.5× compared to standard federated learning, with a slight accuracy loss of up to 1.5%. Therefore, the newly proposed LFNL can facilitate the development of brain-inspired computing and edge AI.
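The federated-learning backbone that LFNL builds on, devices training locally on private data and a coordinator averaging the resulting models, can be sketched with plain federated averaging on a linear model. This is a generic FedAvg sketch under assumed parameters (4 devices, learning rate, local-step count), not LFNL itself, which additionally uses neuromorphic (spiking) models and a leader-election mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)

# ground-truth linear model; each of 4 "edge devices" holds a private data shard
w_true = np.array([2.0, -1.0, 0.5])
shards = []
for _ in range(4):
    X = rng.standard_normal((200, 3))
    y = X @ w_true + 0.1 * rng.standard_normal(200)
    shards.append((X, y))

w_global = np.zeros(3)
lr, local_steps = 0.05, 10

for _ in range(30):  # communication rounds
    local_models = []
    for X, y in shards:
        w = w_global.copy()
        for _ in range(local_steps):  # local gradient descent on private data
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        local_models.append(w)
    # aggregator averages the models; raw data never leaves a device
    w_global = np.mean(local_models, axis=0)
```

Only model parameters cross the network, which is the source of both the privacy preservation and the data-traffic reduction the abstract reports; LFNL's energy savings come from replacing the dense local models with spiking networks.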