Image processing plays a major role in neurologists' clinical diagnosis. Several types of imagery are used for diagnostics, tumor segmentation, and classification. Magnetic resonance imaging (MRI) is favored among these modalities due to its noninvasive nature and better representation of internal tumor information. Indeed, early diagnosis may increase the chances of survival. However, manual segmentation and classification of brain tumors from MRI is error-prone, time-consuming, and formidable. Consequently, this article presents a deep learning approach to classify brain tumors from MRI data to assist practitioners. The recommended method comprises three main phases: preprocessing, brain tumor segmentation using k-means clustering, and finally, classification of tumors into their respective categories (benign/malignant) using a fine-tuned VGG19 (19-layer Visual Geometry Group) model. Moreover, for better classification accuracy, synthetic data augmentation is introduced to increase the data available for classifier training. The proposed approach was evaluated on the BraTS 2015 benchmark data set through rigorous experiments. The results endorse the effectiveness of the proposed strategy, which achieved better accuracy than previously reported state-of-the-art techniques.
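The k-means segmentation phase can be sketched as follows. This is a minimal intensity-clustering illustration, not the paper's implementation: the helper `kmeans_segment`, the toy "MRI slice", and all parameter choices (k=3, 20 iterations) are assumptions for demonstration.

```python
import numpy as np

def kmeans_segment(image, k=3, iters=20, seed=0):
    """Cluster pixel intensities into k groups (a crude tissue/lesion split)."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 1).astype(float)
    # Initialize centroids from randomly chosen pixels.
    centroids = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        labels = np.argmin(np.abs(pixels - centroids.T), axis=1)
        # Recompute each centroid as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape)

# Toy "MRI slice": dark background, mid-intensity tissue, bright lesion.
img = np.zeros((32, 32))
img[8:24, 8:24] = 0.5
img[14:18, 14:18] = 1.0
mask = kmeans_segment(img, k=3)
print(mask.shape)  # (32, 32), one cluster label per pixel
```

In the described pipeline, the cluster corresponding to the brightest region would then be cropped and passed to the fine-tuned VGG19 classifier.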
A brain tumor is an uncontrolled growth of brain cells that can develop into brain cancer if not detected at an early stage. Early brain tumor diagnosis plays a crucial role in treatment planning and patients' survival rate. Brain tumors have distinct forms, properties, and therapies; manual brain tumor detection is therefore complicated, time-consuming, and vulnerable to error. Hence, automated computer-assisted diagnosis at high precision is currently in demand. This article presents segmentation through a UNet architecture with ResNet50 as a backbone on the Figshare data set, achieving an intersection over union (IoU) of 0.9504. Preprocessing and data augmentation were introduced to enhance the classification rate. Multi-classification of brain tumors is performed using evolutionary algorithms and reinforcement learning through transfer learning; other deep learning models such as ResNet50, DenseNet201, MobileNet V2, and InceptionV3 are also applied. The results show that the proposed research framework performed better than the state of the art. The CNN models applied for tumor classification, MobileNet V2, Inception V3, ResNet50, DenseNet201, and NASNet, attained accuracies of 91.8%, 92.8%, 92.9%, 93.1%, and 99.6%, respectively, with NASNet exhibiting the highest accuracy. Two transfer learning processes, freeze and fine-tune, are performed to extract significant features from MRI slices. Brain tumor multi-classification is thus performed using transfer learning with ResNet50-UNet and NASNet architectures.
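The reported 0.9504 figure is the standard intersection-over-union metric for segmentation masks, which can be computed as below; the function name and the toy masks are illustrative, not taken from the paper.

```python
import numpy as np

def iou(pred, target):
    """Intersection over union for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4-pixel prediction
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6-pixel ground truth
print(iou(a, b))  # intersection 4 / union 6 = 0.666...
```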
Automatic and precise segmentation and classification of tumor areas in medical images remains a challenging task in medical research. Most conventional models use fully connected or convolutional neural networks to perform segmentation and classification. In this research, we present deep learning models using long short-term memory (LSTM) and convolutional neural networks (ConvNet) for accurate brain tumor delineation from benchmark medical images. The two models, ConvNet and LSTM, are trained on the same data set and combined into an ensemble to improve the results. We used the publicly available MICCAI BraTS 2015 brain cancer data set, consisting of MRI images in four modalities: T1, T2, T1c, and FLAIR. To enhance the quality of input images, multiple combinations of preprocessing methods such as noise removal, histogram equalization, and edge enhancement are formulated, and the best-performing combination is applied. To cope with the class imbalance problem, class weighting is used in the proposed models. The trained models are tested on a validation set taken from the same image set, and the results obtained from each model are reported. The individual accuracy of the ConvNet is 75%, whereas the LSTM-based network produced 80% and the ensemble fusion 82.29%.
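The class-weighting step mentioned above is commonly implemented as inverse-frequency weighting; the sketch below assumes that scheme (the paper does not specify its exact formula), and the label counts are a toy example.

```python
import numpy as np

def class_weights(labels):
    """Inverse-frequency weights: w_c = N / (K * n_c), so rare classes count more."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Toy imbalanced labels: 90 background pixels vs. 10 tumor pixels.
y = np.array([0] * 90 + [1] * 10)
print(class_weights(y))  # background ~0.56, tumor 5.0
```

These weights would then scale each class's contribution to the training loss, so the rare tumor class is not drowned out by background.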
In the modern world, wearable smart devices are continuously used to monitor people's health. This study aims to develop an automatic mental stress detection system based on electrocardiogram (ECG) signals from smart T-shirts using machine learning classifiers. We recruited 20 subjects: 10 under mental stress (after twelve hours of continuous work in the laboratory) and 10 in a normal state (after completing sleep or without any work). We also applied three scoring techniques, the Chalder Fatigue Scale (CFS), the Specific Fatigue Scale (SFS), and the Depression, Anxiety, and Stress Scale (DASS), to confirm the mental stress. The total duration of ECG recording was 1800 min, including 1200 min during mental stress and 600 min during the normal state. We calculated two types of features: demographic features and features extracted from the ECG signal. In addition, we used Decision Tree (DT), Naive Bayes (NB), Random Forest (RF), and Logistic Regression (LR) classifiers for intra-subject (mental stress vs. normal) and inter-subject classification. The DT leave-one-out model has the best performance in terms of recall (93.30%), specificity (96.70%), precision (94.40%), accuracy (93.30%), and F1 (93.50%) in the intra-subject classification. The inter-subject classification accuracy is 94.10% with the DT classifier. These findings suggest that the wearable smart T-shirt with the DT classifier may be used in big data applications and health monitoring. Mental stress can lead to mitochondrial dysfunction, oxidative stress, high blood pressure, cardiovascular disease, and various other health problems; real-time ECG signals therefore help assess cardiovascular and related risk factors at an initial stage using machine learning techniques.
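The leave-one-out evaluation protocol used here can be sketched as follows. The study uses a decision tree; this sketch substitutes a trivial 1-nearest-neighbour classifier purely to keep the example self-contained, and the features and labels are invented toy data.

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out: hold each sample out, classify it with 1-nearest neighbour."""
    correct = 0
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf          # exclude the held-out sample itself
        correct += y[np.argmin(dists)] == y[i]
    return correct / len(X)

# Toy features: "stress" recordings cluster high, "normal" recordings cluster low.
X = np.array([[0.1], [0.2], [0.15], [0.9], [1.0], [0.95]])
y = np.array([0, 0, 0, 1, 1, 1])
print(loo_accuracy(X, y))  # 1.0 on this well-separated toy set
```

With only 20 subjects, leave-one-out makes the most of the data: every recording is tested exactly once while the classifier trains on all the others.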
The Internet of Things (IoT) is a recent evolutionary technology that has been a primary focus of researchers for the last two decades. In the IoT, an enormous number of objects are connected using diverse communications protocols. As a result of this massive object connectivity, searching for an exact service from an object is difficult, and hence the issue of scalability arises. To resolve this issue, the idea of integrating the social networking concept into the IoT, generally referred to as the Social Internet of Things (SIoT), was introduced. The SIoT is gaining popularity and attracting the attention of the research community due to its flexible and scalable nature. In the SIoT, objects can find a desired service in a distributed manner by using their neighbors. Although the SIoT approach has been proven efficient, heterogeneous devices are growing so rapidly that problems arise in searching for the right object or service among a huge number of devices. To better analyze the performance of services in an SIoT domain, a certain set of rules must be imposed on these objects. Our novel contribution in this study is to address the link selection problem in the SIoT by proposing an algorithm that follows the key properties of navigability in small-world networks, such as clustering coefficients, path lengths, and giant components. Our algorithm improves object navigability in the SIoT by restricting the number of connections per object and eliminating links that are old or have fewer connections. We performed an extensive series of experiments using real network data sets from social networking sites such as Brightkite and Facebook. The results demonstrate that our algorithm is efficient, especially in reducing path length and increasing the average clustering coefficient, and thus in making the network easier to navigate. The algorithm can easily be applied to a single node or even an entire network.
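The local clustering coefficient mentioned above measures how interconnected a node's neighbours are; the sketch below shows the standard computation on a toy adjacency map (the graph itself is invented, not from the evaluated data sets).

```python
def clustering_coefficient(adj, node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

# Toy SIoT-style graph: node 0's neighbours 1 and 2 are linked; 3 is not.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(clustering_coefficient(adj, 0))  # 1 linked pair out of 3 -> 0.333...
```

Averaging this value over all nodes gives the network-wide clustering coefficient the experiments report.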
Deep learning models have been successfully applied in a wide range of fields, and the creation of deep learning frameworks for analyzing high-throughput sequence data has piqued the research community's interest. N4-acetylcytidine (ac4C) is a post-transcriptional modification of mRNA that plays an important role in mRNA stability control and translation. Detecting ac4C modifications through conventional laboratory experiments is still neither simple, fast, nor cost-effective. As a result, we developed DL-ac4C, a CNN-based deep learning model for ac4C recognition. Such model families are well suited to large data sets with many available samples, especially in biological domains. In this study, the DL-ac4C (deep learning) method is compared to non-deep-learning (machine learning) methods, namely regression and support vector machines. The results show that DL-ac4C outperforms previously used approaches: the proposed model improves the area under the precision-recall curve by 9.6 and 9.8 percent for cross-validation and independent tests, respectively. More nuanced methods of incorporating prior biological knowledge into the estimation procedure of deep learning models are required to achieve better results in terms of predictive efficiency and cost-effectiveness. Based on an experimentally acetylated data set, the DL-ac4C sequence-based predictor can predict whether query sequences contain potential acetylation motifs.
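A CNN over mRNA sequences needs a numeric encoding of the input; one-hot encoding over the four bases is the usual choice. The sketch below assumes that representation (the paper's exact input encoding is not stated here), and the helper name and example sequence are illustrative.

```python
import numpy as np

def one_hot_rna(seq):
    """Encode an RNA string as a (len, 4) matrix over A, C, G, U for CNN input."""
    alphabet = "ACGU"
    mat = np.zeros((len(seq), 4))
    for i, base in enumerate(seq.upper()):
        if base in alphabet:              # unknown bases stay all-zero
            mat[i, alphabet.index(base)] = 1.0
    return mat

x = one_hot_rna("ACGUC")
print(x.shape)        # (5, 4)
print(x[0].tolist())  # A -> [1.0, 0.0, 0.0, 0.0]
```

Stacking such matrices for fixed-length windows around candidate cytidines yields the tensors a convolutional predictor like DL-ac4C would train on.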
Most multicellular organisms require apoptosis, or programmed cell death, to function properly and survive. The morphological and biochemical characteristics of apoptosis have remained remarkably consistent throughout evolution. Apoptosis is thought to have at least three functionally distinct phases: induction, effector, and execution. Recent studies have revealed that reactive oxygen species (ROS) and oxidative stress can play an essential role in apoptosis. Advanced microscopic imaging techniques allow biologists to acquire an extensive number of cell images within minutes, which rules out manual analysis of the acquired image data. Segmentation of cell images is often considered the cornerstone and central problem of image analysis, and segmentation of mitochondrial cell images via deep learning is currently receiving increasing attention. Manual labeling of cell images is time-consuming and requires trained professionals. As a remedy, mitochondrial cell imaging (MCI) is proposed to identify normal, drug-treated, and diseased cells; furthermore, cell movement (fission and fusion) is measured to evaluate disease risk. The newly proposed drug-treated, normal, and diseased image segmentation (DNDIS) algorithm can quickly segment mitochondrial cell images without supervision and further separate the cells in a picture into normal, diseased, and drug-treated classes. The proposed method is based on the ResNet-50 deep learning architecture. The data set consists of 414 microscopy images categorised into three sets (drug-treated, diseased, and normal). The proposed automated segmentation method outperforms previous approaches, securing high precision (90%, 92%, and 94% for the three classes), and trains properly. This study will benefit medicine and diseased-cell measurement in medical tests and clinical practice.
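The three per-class precision figures (90%, 92%, 94%) are computed per category; the sketch below shows the standard calculation, with invented toy predictions rather than the paper's results.

```python
def per_class_precision(y_true, y_pred, classes):
    """Precision per class: of all cells predicted as class c, how many truly are c."""
    out = {}
    for c in classes:
        predicted = sum(1 for p in y_pred if p == c)
        hits = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        out[c] = hits / predicted if predicted else 0.0
    return out

y_true = ["normal", "drug", "diseased", "normal", "drug"]
y_pred = ["normal", "drug", "diseased", "drug", "drug"]
print(per_class_precision(y_true, y_pred, ["normal", "drug", "diseased"]))
```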