Diabetes mellitus (DM) is a severe chronic disease that affects human health and has a high prevalence worldwide. Research has shown that half of the people with diabetes worldwide are unaware that they have DM, and its complications are increasing, which presents new research challenges and opportunities. In this paper, we propose a preemptive diagnosis method for DM to assist or complement early recognition of the disease in countries with a low density of medical experts. Diabetes data were collected from Zewditu Memorial Hospital in Addis Ababa, Ethiopia (the ZMHDD dataset). The Light Gradient Boosting Machine (LightGBM) is one of the most recent successful gradient boosting frameworks that uses tree-based learning algorithms. It has low computational complexity and is therefore suited to applications in resource-limited regions such as Ethiopia. Thus, in this study, we apply LightGBM to develop an accurate model for the diagnosis of diabetes. The experimental results show that the prepared diabetes dataset is informative for predicting the condition of diabetes mellitus. With an accuracy, AUC, sensitivity, and specificity of 98.1%, 98.1%, 99.9%, and 96.3%, respectively, the LightGBM model outperformed KNN, SVM, NB, Bagging, RF, and XGBoost on the ZMHDD dataset.
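To make the modeling step concrete, the following is a minimal sketch of training and evaluating a LightGBM binary classifier. The ZMHDD dataset is not public, and the paper's exact features and hyperparameters are not given in the abstract, so the file name `zmhdd.csv`, the label column `diabetic`, and the hyperparameter values below are assumptions for illustration only.

```python
# Hedged sketch: LightGBM binary classifier on a tabular diabetes dataset.
import lightgbm as lgb
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

df = pd.read_csv("zmhdd.csv")                          # hypothetical file name
X = df.drop(columns=["diabetic"])                      # hypothetical label column
y = df["diabetic"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Near-default hyperparameters; the paper's exact settings are not stated.
model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.1, num_leaves=31)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
pred = (proba >= 0.5).astype(int)
print("accuracy:", accuracy_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, proba))
```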
A brain tumor is one of the foremost reasons for the rise in mortality among children and adults. A brain tumor is a mass of tissue that grows out of control of the normal forces that regulate growth inside the brain; it appears when one type of cell departs from its normal characteristics and grows and multiplies abnormally. This unusual growth of cells within the brain or inside the skull, which can be cancerous or non-cancerous, has been a cause of death among adults in developed countries and among children in developing countries like Ethiopia. Previous studies have shown that the region growing algorithm initializes the seed point either manually or semi-manually, which in turn affects the segmentation result. In this paper, we therefore propose an enhanced region growing algorithm with automatic seed point initialization. The proposed approach's performance was compared with state-of-the-art deep learning algorithms using a common dataset, BRATS2015. In the proposed approach, we applied a thresholding technique to strip the skull from each input brain image. After the skull is stripped, the brain image is divided into 8 blocks. Then, for each block, we computed the mean intensity, and the five blocks with the maximum mean intensities were selected out of the eight. Next, each of the five maximum-mean-intensity blocks was used separately as a seed point for the region growing algorithm, yielding five different regions of interest (ROIs) for each skull-stripped input brain image. The five ROIs generated using the proposed approach were evaluated against the ground truth (GT) using the dice similarity score (DSS), intersection over union (IoU), and accuracy (Acc), and the best region of interest was selected as the final ROI. Finally, the final ROI was compared with different state-of-the-art deep learning algorithms and region-based segmentation algorithms in terms of DSS. Our proposed approach was validated in three different experimental setups. In the first setup, 15 randomly selected brain images were used for testing, and the approach achieved a DSS value of 0.89. In the second and third setups, the proposed approach scored DSS values of 0.90 and 0.80 for 12 randomly selected and 800 brain images, respectively. The average DSS value over the three experimental setups was 0.86.
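A minimal 2D sketch of the seed-selection and scoring pipeline is given below. The abstract does not state the exact block layout (a 2x4 grid is assumed), how the seed pixel is chosen within a block (the block center is assumed), or the region-growing homogeneity criterion (an intensity tolerance on images normalized to [0, 1] is assumed), so all of those choices are illustrative.

```python
# Hedged sketch: automatic seed initialization for region growing,
# plus Dice scoring to pick the best of the five candidate ROIs.
import numpy as np
from collections import deque

def candidate_seeds(img, n_blocks=8, n_seeds=5):
    """Split the skull-stripped image into blocks (a 2x4 grid is assumed)
    and return the centers of the n_seeds blocks with the highest mean
    intensity."""
    h, w = img.shape
    rows, cols = 2, n_blocks // 2
    bh, bw = h // rows, w // cols
    blocks = []
    for r in range(rows):
        for c in range(cols):
            block = img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            center = (r * bh + bh // 2, c * bw + bw // 2)
            blocks.append((block.mean(), center))
    blocks.sort(reverse=True)
    return [center for _, center in blocks[:n_seeds]]

def region_grow(img, seed, tol=0.1):
    """Breadth-first region growing: absorb 4-connected neighbours whose
    intensity is within tol of the seed intensity (tol is an assumption)."""
    mask = np.zeros(img.shape, dtype=bool)
    q = deque([seed])
    mask[seed] = True
    ref = img[seed]
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx] and abs(img[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

def dice(pred, gt):
    """Dice similarity score: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# Usage, given a skull-stripped image `img` and ground-truth mask `gt`:
# rois = [region_grow(img, s) for s in candidate_seeds(img)]
# best_roi = max(rois, key=lambda m: dice(m, gt))
```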
The orchestration of software-defined networks (SDN) and the internet of things (IoT) has revolutionized the computing field, extending connectivity to a broad spectrum of sensors and electronic appliances beyond standard computing devices. However, these networks are still vulnerable to botnet attacks such as distributed denial of service, network probing, backdoors, information stealing, and phishing, which can disrupt and sometimes cause irreversible damage to several sectors of the economy. As a result, several machine learning-based solutions have been proposed to improve the real-time detection of botnet attacks in SDN-enabled IoT networks. The aim of this review is to investigate research studies that applied machine learning techniques to deter botnet attacks in SDN-enabled IoT networks. First, the major botnet attacks on SDN-IoT networks are thoroughly discussed. Second, the machine learning techniques commonly used for detecting and mitigating botnet attacks in SDN-IoT networks are discussed. Finally, the performance of these techniques in detecting and mitigating botnet attacks is presented in terms of commonly used machine learning performance metrics. Both classical machine learning (ML) and deep learning (DL) techniques show comparable performance in botnet attack detection. However, classical ML techniques require extensive feature engineering to obtain optimal features for efficient detection, and they fall short of detecting unforeseen botnet attacks. Furthermore, timely detection, real-time monitoring, and adaptability to new types of attacks remain challenging for classical ML techniques, mainly because they rely on signatures of already known malware both during training and after deployment.
Health has been a critical concern for living things since long before modern technology existed. Nowadays, the healthcare domain offers broad scope for research, as it has evolved enormously. The most researched areas in the health sector include diabetes mellitus (DM), breast cancer, and brain tumors. DM is a severe chronic disease that affects human health and has a high prevalence throughout the world. Early prediction of DM is important to reduce its risk and even avoid it. In this study, we propose a DM prediction model based on global and local learner algorithms. The proposed global and local learners stacking (GLLS) model combines prediction algorithms from two largely different but complementary machine learning paradigms, XGBoost and NB as global learners and kNN and SVM (with an RBF kernel) as local learners, and aggregates them with a stacking ensemble technique using LR as the meta-learner. The effectiveness of the GLLS model was demonstrated by comparing several performance measures and the results of different contrast experiments. The evaluation results on the UCI Pima Indian diabetes dataset (PIDD) indicate that the model achieved better prediction performance, 99.5%, 99.5%, 99.5%, 99.1%, and 100% in terms of accuracy, AUC, F1 score, sensitivity, and specificity, respectively, than other results reported in the literature. Moreover, to further validate the GLLS model's performance, three additional medical datasets, Messidor, WBC, and ILPD, were considered, on which the model achieved accuracies of 82.1%, 98.6%, and 89.3%, respectively. The experimental results demonstrate the effectiveness and superiority of our proposed GLLS model.
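The GLLS architecture maps naturally onto a standard stacking ensemble. The following is a minimal sketch using scikit-learn and xgboost; the paper's exact hyperparameters, preprocessing, and cross-validation scheme are not given in the abstract, so library defaults, feature scaling for the distance-based learners, and 5-fold stacking are assumptions.

```python
# Hedged sketch: GLLS-style stacking with global (XGBoost, NB) and
# local (kNN, RBF-SVM) base learners, combined by logistic regression.
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from xgboost import XGBClassifier

base_learners = [
    # Global learners: one model fit over the whole feature space.
    ("xgb", XGBClassifier(eval_metric="logloss")),
    ("nb", GaussianNB()),
    # Local learners: predictions driven by nearby training points,
    # so their inputs are standardized first.
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("svm", make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))),
]

# LR meta-learner combines the base learners' out-of-fold probabilities.
gll_stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",
    cv=5,
)

# Usage, e.g. with the UCI Pima Indians diabetes data loaded as X, y:
# gll_stack.fit(X_train, y_train)
# y_pred = gll_stack.predict(X_test)
```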
The Internet of things (IoT) is being used in a variety of industries, including agriculture, the military, smart cities, smart grids, and personalized health care, and it is also being used to control critical infrastructure. Nevertheless, because IoT devices lack security procedures and the processing power to execute computationally costly anti-malware applications, they are susceptible to malware attacks. In addition, conventional malware-detection mechanisms identify a threat through known malware fingerprints stored in their databases. With the ever-evolving and drastic increase in malware threats to the IoT, however, traditional anti-malware software, which defends only against known threats, is no longer enough. Consequently, in this paper, we propose a lightweight deep learning model for an SDN-enabled IoT framework that supports the underlying resource-constrained IoT devices by provisioning computing resources to deploy instant protection against botnet malware attacks. The proposed model achieves 99% precision, recall, and F1 score and 99.4% accuracy. Its execution time is 0.108 milliseconds, with a size of 118 KB and 19,414 parameters. The proposed model thus achieves high accuracy while using few computational resources, addressing the resource-limitation issue.
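The abstract does not specify the model's architecture, input features, or framework, so the sketch below is only an illustration of how a small Keras classifier in this general size class might be built and how its parameter count, one of the lightweight-ness measures cited above, can be checked; the layer sizes and the assumed 40 flow-level input features are hypothetical, and the resulting count will differ from the paper's 19,414.

```python
# Hedged sketch: a small dense network for binary botnet-traffic
# classification, sized to stay lightweight for SDN deployment.
import tensorflow as tf

NUM_FEATURES = 40   # assumed number of flow-level features
NUM_CLASSES = 2     # benign vs. botnet traffic

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# count_params() reports the total parameter count, one simple way to
# verify a model stays small enough for resource-constrained deployment.
print("parameters:", model.count_params())
```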