Background: Computer-aided diagnosis can facilitate the early detection and diagnosis of multiple sclerosis (MS), thus enabling earlier interventions and a reduction in long-term MS-related disability. Recent advances in artificial intelligence (AI) have led to improvements in the classification, quantification, and identification of diagnostic patterns in medical images for a range of diseases, in particular for MS. Importantly, data generated using AI techniques are analyzed automatically, which compares favourably with labour-intensive and time-consuming manual methods. Objective: The aim of this review is to help MS researchers understand current and future developments in the AI-based diagnosis and prognosis of MS. Methods: We investigate a variety of AI approaches and classifiers and compare the current state-of-the-art techniques for lesion segmentation/detection and disease prognosis. After briefly describing the magnetic resonance imaging (MRI) techniques commonly used, we describe the AI techniques used for lesion detection and MS prognosis. Results: We then evaluate the clinical maturity of these AI techniques in relation to MS. Conclusion: Finally, future research challenges are identified to encourage further improvement of these methods.
3D face reconstruction is considered a useful computer vision tool, though it is difficult to build. This paper proposes a 3D face reconstruction method that is easy to implement and computationally efficient. It takes a single 2D image as input and produces a reconstructed 3D face as output. Our method consists of three main steps: feature extraction, depth calculation, and creation of a 3D image from the processed image using the Basel face model (BFM). First, the features of a single 2D image are extracted in a two-step process. Before distinctive features are extracted, face detection confirms whether a face is present in the input image; for this purpose, facial features such as the eyes, nose, and mouth are located. Then, distinctive features are extracted using the scale-invariant feature transform (SIFT); these are used for 3D face reconstruction at a later stage. The second step comprises depth calculation, which assigns the image a third dimension. A multivariate Gaussian distribution is used to estimate the third dimension, which is further refined using shading cues obtained with the shape-from-shading (SFS) technique. Third, the data obtained in the previous two steps are used to create a 3D image with the BFM. The proposed method does not rely on multiple images, which lightens the computational burden. Experiments were carried out on different 2D images to validate the proposed method, and its performance was compared with that of state-of-the-art approaches. The results demonstrate that the proposed method is time efficient and robust, and that it outperformed all of the tested methods in terms of detail recovery and accuracy.
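As an illustration of the depth-calculation step, the sketch below estimates a landmark's depth as the conditional mean of a multivariate Gaussian over (x, y, z). The mean and covariance here are made-up toy values standing in for statistics that would be learned from registered 3D face data; this is a minimal sketch of the idea, not the paper's implementation.

```python
import numpy as np

# Toy prior over (x, y, depth) for a single facial landmark. These numbers
# are placeholders for statistics that would be learned from 3D face scans.
mu = np.array([0.0, 0.0, 50.0])            # mean of (x, y, z)
cov = np.array([[4.0, 0.5, 1.0],
                [0.5, 4.0, 1.2],
                [1.0, 1.2, 9.0]])          # joint covariance of (x, y, z)

def conditional_depth(xy, mu, cov):
    """E[z | x, y] for a joint Gaussian over (x, y, z)."""
    mu_xy, mu_z = mu[:2], mu[2]
    S_xx = cov[:2, :2]          # covariance of the observed 2D part
    S_zx = cov[2, :2]           # cross-covariance between z and (x, y)
    return mu_z + S_zx @ np.linalg.solve(S_xx, xy - mu_xy)

z_hat = conditional_depth(np.array([1.0, -0.5]), mu, cov)
```

Observing the landmark away from the 2D mean shifts the depth estimate away from the prior mean in proportion to the cross-covariance; SFS shading cues would then refine this initial estimate.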
Low-density parity-check (LDPC) codes have become a focal choice for next-generation Internet of things (IoT) networks. This correspondence proposes an efficient decoding algorithm, dual min-sum (DMS), to estimate the first two minima from a set of variable nodes for the check-node update (CNU) operation of a min-sum (MS) LDPC decoder. The proposed architecture entirely eliminates the large multiplexing system of the sorting-based architecture, yielding a marked reduction in hardware complexity and critical-path delay. Specifically, the DMS architecture eliminates a large number of comparators and multiplexers while keeping the critical delay equal to that of the most delay-efficient tree-based architecture. Experimental results show that, for 64 inputs, the proposed architecture saves 69%, 68%, and 52% area over the sorting-based, tree-based, and low-complexity tree-based architectures, respectively. Furthermore, simulation results show that the proposed approach provides excellent error-correction performance in terms of bit error rate (BER) and block error rate (BLER) over an additive white Gaussian noise (AWGN) channel.
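The two-minimum search at the heart of any min-sum CNU can be sketched behaviourally as follows. This is a software illustration of what such hardware computes (a single pass tracking min1, min2, and the index of min1), not the proposed DMS circuit architecture.

```python
def first_two_minima(vals):
    """Single pass over variable-node magnitudes: returns (min1, min2, idx1)."""
    min1 = min2 = float("inf")
    idx1 = -1
    for i, v in enumerate(vals):
        if v < min1:
            min1, min2, idx1 = v, min1, i   # new smallest; old min1 becomes min2
        elif v < min2:
            min2 = v                         # new second-smallest
    return min1, min2, idx1

def cnu_magnitudes(vals):
    """Min-sum CNU magnitude sent to each VN: the minimum over all *other* inputs."""
    m1, m2, i1 = first_two_minima(vals)
    return [m2 if i == i1 else m1 for i in range(len(vals))]
```

Only the two minima and one index need to be stored, which is why architectures that extract them cheaply dominate the CNU's hardware cost.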
Abstract. In this paper, we target enhanced 3D reconstruction of non-rigidly deforming objects based on a view-independent surface representation with an automated recursive filtering scheme. This work improves upon the KinectDeform algorithm, which we recently proposed. KinectDeform uses an implicit view-dependent volumetric truncated signed distance function (TSDF) surface representation. This view-dependence complicates its pipeline by requiring surface prediction and extraction steps based on the camera's field of view. This paper proposes instead an explicit projection-based Moving Least Squares (MLS) surface representation computed from point-sets. Moreover, the empirical weighted filtering scheme in KinectDeform is replaced by an automated fusion scheme based on a Kalman filter. We analyze the performance of the proposed algorithm both qualitatively and quantitatively and show that it produces enhanced, feature-preserving 3D reconstructions.
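As a minimal illustration of Kalman-filter-based fusion, the sketch below runs one scalar predict/update step per surface point under a static-state model. The noise variances `q` and `r` are placeholder assumptions, not values from the paper; the point is that the filter weights each new measurement by its uncertainty automatically, replacing hand-tuned empirical weights.

```python
def kalman_fuse(z_meas, z_est, p_est, q=1e-4, r=1e-2):
    """One scalar Kalman step for a single surface point.

    z_est, p_est: fused estimate and its variance from previous frames;
    z_meas: new noisy measurement; q, r: assumed process and measurement
    noise variances (placeholders, not values from the paper).
    """
    p_pred = p_est + q                    # predict: state assumed static
    k = p_pred / (p_pred + r)             # Kalman gain
    z_new = z_est + k * (z_meas - z_est)  # update toward the measurement
    p_new = (1.0 - k) * p_pred            # uncertainty shrinks after fusion
    return z_new, p_new

# Fusing repeated observations of the same point drives the variance down.
z, p = 0.0, 1.0
for _ in range(10):
    z, p = kalman_fuse(1.0, z, p)
```

Each recursion makes the fused estimate both closer to the true surface and more confident, which is the recursive-filtering behaviour the pipeline relies on.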
Chromatic dispersion, also called group-velocity dispersion (GVD), is a serious issue for communication engineers designing a wavelength-division multiplexing (WDM) system. A pulse broadens with increasing fiber length due to chromatic dispersion, and this broadening causes adjacent bit periods to overlap, producing inter-symbol interference (ISI). Chromatic dispersion also has an inverse relation with four-wave mixing (FWM): FWM is at its minimum when chromatic dispersion is at its highest. In this work, pulse broadening as a function of fiber length is shown through simulation results for a 40 Gbps fiber-optic system at various lengths (50-200 km) using standard single-mode fiber (SMF) in Optisystem software. A dispersion compensation bank (DCB) is used to reduce the effect of chromatic dispersion, and it is shown that the dispersion effect is mitigated more effectively by using the DCB. BER analysis is also presented through simulation at various power levels (5-20 dBm), showing that BER increases with increasing input launch power.
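The first-order broadening estimate Δτ ≈ D · L · Δλ makes the length dependence concrete. The dispersion parameter and spectral width below are typical assumed values for SMF near 1550 nm, not figures taken from the paper's simulation.

```python
D = 17.0            # ps/(nm*km), typical SMF dispersion near 1550 nm (assumed)
delta_lambda = 0.1  # nm, assumed source spectral width
T_bit = 25.0        # ps, bit slot at 40 Gbps (1 / 40e9 s)

# First-order broadening estimate for each simulated span length.
spreads = {L_km: D * L_km * delta_lambda for L_km in (50, 100, 150, 200)}
for L_km, spread_ps in spreads.items():
    status = "exceeds" if spread_ps > T_bit else "within"
    print(f"{L_km:>3} km: {spread_ps:6.1f} ps broadening ({status} the 25 ps bit slot)")
```

Under these assumptions the broadening exceeds the 25 ps bit slot even at the shortest 50 km span, which is why 40 Gbps links over uncompensated SMF need dispersion compensation such as the DCB.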