Electroencephalogram (EEG) recordings contain many sources of interference, notably ocular, muscular, and cardiac artifacts. Artifact rejection is an essential step in EEG analysis, since such artifacts distort subsequent signal analysis. One of the most challenging denoising problems is removing ocular artifacts, because electrooculographic (EOG) and EEG signals overlap in both the time and frequency domains. In this paper, we build and train a deep learning model to address this challenge and remove ocular artifacts effectively. In the proposed scheme, each EEG signal is converted into an image and fed to a U-Net, a deep learning model commonly used for image segmentation tasks. We propose three different schemes in which U-Net-based models learn to purify contaminated EEG signals in a manner analogous to image segmentation. The results confirm that one of the schemes achieves reliable and promising accuracy in reducing the mean squared error between the target signals (pure EEG) and the predicted signals (purified EEG).
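As a minimal sketch of the two reusable pieces of such a pipeline (not the authors' exact architecture, which is a full U-Net): a 1-D EEG segment can be folded into a 2-D array so it resembles an image input, and the denoising quality can be scored with the mean squared error between pure and purified signals. The segment length and image size below are illustrative assumptions.

```python
import numpy as np

def segment_to_image(signal, rows):
    """Fold a 1-D signal of length >= rows*cols into a 2-D 'image'."""
    signal = np.asarray(signal)
    cols = len(signal) // rows
    return signal[: rows * cols].reshape(rows, cols)

def mse(target, prediction):
    """Mean squared error between pure and purified EEG."""
    return float(np.mean((np.asarray(target) - np.asarray(prediction)) ** 2))

rng = np.random.default_rng(0)
pure = rng.standard_normal(1024)                       # stand-in clean EEG segment
contaminated = pure + 0.3 * rng.standard_normal(1024)  # simulated EOG artifact

img = segment_to_image(contaminated, rows=32)          # 32x32 model input
print(img.shape)                                       # (32, 32)
print(mse(pure, contaminated) > 0)
```

A trained denoising model would map `img` back to an estimate of `pure`; the MSE above is the quantity such training minimizes.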
Time-lapse microscopy can directly capture the dynamics and heterogeneity of cellular processes at the single-cell level. Successful application of single-cell live microscopy requires automated segmentation and tracking of hundreds of individual cells over several time points. Recently, deep learning models have ushered in a new era in quantitative analysis of microscopy images. This work presents a versatile and trainable deep-learning-based software, termed DeepSea, that allows for both segmentation and tracking of single cells and their nuclei in sequences of phase-contrast live microscopy images. We show that DeepSea can quantify several cell biological features of mouse embryonic stem cells, such as cell division cycle, mitosis, cell morphology, and cell size, with high precision using phase-contrast images. Using DeepSea, we were able to show that despite their ultrafast cell division cycle, mouse embryonic stem cells exhibit cell size control in the G1 phase of the cell cycle.
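DeepSea's tracker is learned from data; the sketch below shows only the generic idea of linking segmented cells across consecutive frames, using a simple nearest-centroid match as an illustrative stand-in. The centroids and frame names are hypothetical.

```python
import numpy as np

def link_frames(prev_centroids, next_centroids):
    """Match each cell centroid in the previous frame to its nearest
    centroid in the next frame; returns (prev_index, next_index) pairs."""
    links = []
    nxt = np.asarray(next_centroids, dtype=float)
    for i, p in enumerate(prev_centroids):
        d = np.linalg.norm(nxt - np.asarray(p, dtype=float), axis=1)
        links.append((i, int(np.argmin(d))))
    return links

frame_t  = [(10.0, 10.0), (40.0, 40.0)]   # centroids at time t
frame_t1 = [(41.0, 39.0), (11.0, 12.0)]   # centroids at time t+1 (reordered)
print(link_frames(frame_t, frame_t1))     # [(0, 1), (1, 0)]
```

Real trackers must also handle division (one parent linked to two daughters) and cells entering or leaving the field of view, which this greedy match ignores.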
Mild traumatic brain injury (mTBI) is a major public health concern that can result in a broad spectrum of short-term and long-term symptoms. Recently, machine learning (ML) algorithms have been applied in neuroscience research for the diagnosis and prognostic assessment of brain disorders. The present study aimed to develop an automatic classifier to distinguish patients with chronic mTBI from healthy controls (HCs) using multilevel metrics of resting-state functional magnetic resonance imaging (rs-fMRI). Sixty mTBI patients and forty HCs were enrolled and split into training and testing datasets with an 80:20 ratio. Several rs-fMRI metrics, including fractional amplitude of low-frequency fluctuation (fALFF), regional homogeneity (ReHo), degree centrality (DC), voxel-mirrored homotopic connectivity (VMHC), functional connectivity strength (FCS), and seed-based FC, were generated from two main analytical categories: local measures and network measures. A two-sample t-test was used to compare the mTBI and HC groups. For each rs-fMRI metric, features were then selected by extracting the mean values from the clusters showing significant group differences. Finally, support vector machine (SVM) models based on separate and combined multilevel metrics were built, and classifier performance was assessed using five-fold cross-validation and the area under the receiver operating characteristic curve (AUC). Feature importance was estimated using Shapley additive explanation (SHAP) values. Among local measures, AUC ranged from 86.67% to 100%, and the optimal SVM models were obtained with the combined multilevel rs-fMRI metrics and with DC as a separate model, both reaching an AUC of 100%. Among network measures, AUC ranged from 80.42% to 93.33%, and the optimal SVM model was obtained with the combined multilevel seed-based FC metrics.
The SHAP analysis identified the DC value in the left postcentral region and the seed-based FC value between the motor ventral network and the right superior temporal region as the most important local and network features, with the greatest contribution to the classification models. Our findings demonstrate that different rs-fMRI metrics can provide complementary information for classifying patients with chronic mTBI. Moreover, we show that the ML approach is a promising tool for detecting patients with mTBI and might serve as a potential imaging biomarker for identifying patients at the individual level. Clinical trial registration: [clinicaltrials.gov], identifier [NCT03241732].
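The study's evaluation step (an SVM scored by five-fold cross-validated AUC) can be sketched as follows. The data here are synthetic stand-ins: random features with a group shift play the role of the mean rs-fMRI metric values extracted from significant clusters, and the 60/40 group sizes mirror the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X_mtbi = rng.standard_normal((60, 6)) + 0.8   # 60 patients, 6 metric features
X_hc   = rng.standard_normal((40, 6))         # 40 healthy controls
X = np.vstack([X_mtbi, X_hc])
y = np.array([1] * 60 + [0] * 40)

# Linear SVM with feature standardization, scored by ROC AUC over 5 folds.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC: {auc.mean():.2f}")
```

With real data, the feature matrix would come from the cluster-mean extraction step, and SHAP values could then be computed on the fitted model to rank feature contributions.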
Super-resolution (SR) imaging refers to a family of techniques that enhance the resolution of an imaging system; it is especially attractive for surveillance cameras, where simplicity and low cost are of great importance. SR image reconstruction can be viewed as a three-stage process: image interpolation, image registration, and fusion. Image interpolation is one of the most critical steps in SR algorithms and has a significant influence on the quality of the output image. In this paper, two hardware-efficient interpolation methods are proposed for such low-cost platforms, mainly targeting mobile applications. Experiments on synthetic and real image sequences validate the performance of the proposed scheme and indicate that the approach is applicable to real-world settings. The algorithms are implemented on a Field Programmable Gate Array (FPGA) device using a pipelined architecture, and the implementation results show the advantages of the proposed methods in terms of area, performance, and output quality.
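For reference, the baseline that hardware-efficient interpolation methods are typically compared against is plain bilinear interpolation. The sketch below shows that baseline in NumPy (the paper's proposed FPGA-oriented variants are not reproduced here); edge pixels are clamped, and the upscaling factor is an integer for simplicity.

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Upscale a 2-D grayscale image by an integer factor using
    bilinear interpolation, clamping at the image borders."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                     # vertical blend weights
    wx = (xs - x0)[None, :]                     # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

img = np.array([[0.0, 1.0], [2.0, 3.0]])
print(bilinear_upscale(img, 2))                 # 4x4 result, corners preserved
```

Hardware-efficient variants usually replace the fractional weights with fixed-point arithmetic and reuse partial sums across the pipeline, which is what makes a pipelined FPGA implementation compact.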
Wireless Capsule Endoscopy (WCE), introduced in 2001, is one of the key approaches for observing the entire gastrointestinal (GI) tract, particularly the small bowel, and has been used to detect diseases of the GI tract. Endoscopic image analysis remains an active field with many open problems. Owing to the nature of this imaging system, the quality of many captured images is poor, which hinders assessment by physicians and by computer-aided diagnosis systems. In this paper, a novel technique is proposed to improve the quality of images captured by the WCE. More specifically, it enhances brightness and contrast and preserves color information while keeping computational complexity low. The experimental PSNR and SSIM results confirm that the error introduced by the method is negligible. Moreover, the proposed method improves intensity restricted average local entropy (IRMLE) by 22% and the color enhancement factor (CEF) by 10%, and it effectively preserves image lightness. Our method achieves better visual quality and objective assessment scores than state-of-the-art methods.
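A common way to brighten an image while preserving color, in the spirit of (but not identical to) the paper's method, is to stretch only the per-pixel intensity and then rescale each RGB pixel by the ratio of new to old intensity, so channel ratios (and hence hue) are unchanged. The gamma value and test patch below are illustrative assumptions.

```python
import numpy as np

def enhance(rgb, gamma=0.7):
    """Brightness/contrast enhancement that keeps RGB ratios (color) intact."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    intensity = rgb.mean(axis=-1, keepdims=True)   # per-pixel intensity
    enhanced = np.power(intensity, gamma)          # gamma < 1 brightens
    ratio = np.divide(enhanced, intensity,
                      out=np.zeros_like(intensity), where=intensity > 0)
    return np.clip(rgb * ratio * 255.0, 0, 255).astype(np.uint8)

dark = np.full((2, 2, 3), (40, 20, 10), dtype=np.uint8)   # dim reddish patch
print(enhance(dark)[0, 0])   # brighter, same 4:2:1 channel ratio
```

Because every channel of a pixel is multiplied by the same ratio, the output is brighter without the color shifts that naive per-channel histogram stretching can introduce.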