This research work investigates the skin lesion classification problem using a Convolutional Neural Network (CNN) within a cloud-server architecture. Using cloud services and a CNN, a real-time, mobile-enabled skin lesion classification expert system, "i-Rash", is proposed and developed. i-Rash is aimed at the early diagnosis of acne, eczema and psoriasis at remote locations. The classification model used in i-Rash is built on the CNN model SqueezeNet. A transfer learning approach is used for training, and the model is trained and tested on 1856 images. A benefit of using SqueezeNet is the small size of the trained model, i.e., only 3 MB. To classify a new image, a cloud-based architecture is used, and the trained model is deployed on a server. A new image is classified in a fraction of a second, with overall accuracy, sensitivity and specificity of 97.21%, 94.42% and 98.14%, respectively. i-Rash can serve in the initial classification of skin lesions and can therefore play an important role in the early classification of skin lesions for people living in remote areas.
Skin disease cases are increasing daily and are difficult to manage because of the global imbalance between skin disease patients and dermatologists. Skin diseases are among the top five leading causes of the worldwide disease burden. To reduce this burden, computer-aided diagnosis (CAD) systems are in high demand. Single-disease classification is the major shortcoming of existing work. Because many skin diseases share similar characteristics, classification of multiple skin lesions is very challenging. This research work extends our existing work by proposing a novel scheme for multi-class classification. The proposed framework classifies an input skin image into one of six non-overlapping classes, i.e., healthy, acne, eczema, psoriasis, benign and malignant melanoma. The framework comprises four steps, i.e., pre-processing, segmentation, feature extraction and classification. Different image processing and machine learning techniques are used to accomplish each step. 10-fold cross-validation is used, and experiments are performed on 1800 images. An accuracy of 94.74% was achieved using a quadratic support vector machine. The proposed classification scheme can help patients in the early classification of skin lesions.
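The evaluation protocol above is 10-fold cross-validation over 1800 images. A minimal sketch of how such a split partitions the sample indices is shown below; the model training step itself is omitted, and the plain sequential partitioning (no shuffling or stratification) is an assumption for illustration.

```python
# Sketch: plain-Python k-fold cross-validation index splitting.
# Each sample appears in exactly one test fold across the k iterations.

def k_fold_indices(n_samples, k=10):
    """Yield (train_idx, test_idx) pairs covering every sample once."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples)
                 if i < start or i >= start + size]
        yield train, test
        start += size

# With 1800 images and k=10, each test fold holds 180 images
# and each training fold holds the remaining 1620.
folds = list(k_fold_indices(1800, 10))
```

The reported accuracy would then be averaged over the 10 per-fold accuracies.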
Multi-access edge computing (MEC) is a leading new technology for meeting the demands of key performance indicators (KPIs) in 5G networks. However, in a rapidly changing dynamic environment, it is hard to find the optimal target server for processing offloaded tasks because end users' demands are not known in advance. Therefore, quality of service (QoS) deteriorates because of increasing task failures and long execution latency from congestion. To reduce latency and avoid task failures on resource-constrained edge servers, vertical offloading between mobile devices with local-edge collaboration, or with local edge-remote cloud collaboration, has been proposed in previous studies. However, these studies ignored nearby edge servers in the same tier that have excess computing resources. Therefore, this paper introduces a fuzzy decision-based cloud-MEC collaborative task offloading management system called FTOM, which takes advantage of powerful remote cloud-computing capabilities and utilizes neighboring edge servers. The main objective of the FTOM scheme is to select the optimal target node for task offloading based on server capacity, latency sensitivity, and network conditions. Our proposed scheme can make dynamic decisions in which local or nearby MEC servers are preferred for offloading delay-sensitive tasks, while delay-tolerant, high resource-demand tasks are offloaded to a remote cloud server. Simulation results affirm that our proposed FTOM scheme significantly improves the rate of successfully executed offloaded tasks by approximately 68.5%, and reduces task completion time by 66.6%, when compared with a local edge offloading (LEO) scheme. The improved and reduced rates are 32.4% and 61.5%, respectively, when compared with a two-tier edge orchestration-based offloading (TTEO) scheme.
The corresponding rates are 8.9% and 47.9%, respectively, when compared with a fuzzy orchestration-based load balancing (FOLB) scheme; approximately 3.2% and 49.8%, respectively, when compared with a fuzzy workload orchestration-based task offloading (WOTO) scheme; and approximately 38.6% and 55%, respectively, when compared with a fuzzy edge-orchestration-based collaborative task offloading (FCTO) scheme.
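The core decision described above routes each task to a local server, a same-tier neighbor, or the remote cloud based on delay sensitivity, server load, and network condition. A minimal fuzzy-style sketch of that decision is given below; the triangular membership shapes, rule weights, and input choices are illustrative assumptions, not FTOM's actual rule base.

```python
# Sketch: a toy fuzzy offloading decision in the spirit of FTOM.
# All inputs are assumed normalized to [0, 1]; the membership
# functions and weighting below are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def choose_target(delay_sensitivity, local_load, neighbor_load, wan_quality):
    """Return 'local', 'neighbor', or 'cloud' for one offloaded task."""
    # Delay-sensitive tasks prefer lightly loaded edge servers;
    # delay-tolerant tasks tolerate the WAN hop to the cloud.
    local_pref = tri(local_load, -1.0, 0.0, 0.7) * delay_sensitivity
    neighbor_pref = tri(neighbor_load, -1.0, 0.0, 0.8) * delay_sensitivity
    cloud_pref = wan_quality * (1.0 - delay_sensitivity)
    scores = {'local': local_pref,
              'neighbor': neighbor_pref,
              'cloud': cloud_pref}
    return max(scores, key=scores.get)
```

With these toy rules, a delay-sensitive task stays on a lightly loaded local server, spills to an idle neighbor when the local server is busy, and a delay-tolerant task over a good WAN link goes to the cloud.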
Hand, Foot and Mouth Disease (HFMD) is a highly contagious paediatric disease presenting symptoms such as fever, diarrhoea, oral ulcers, and rashes on the hands, feet, and even inside the mouth. The disease has become an epidemic, with several outbreaks in many Asia-Pacific countries and a basic reproduction number R0 > 1. Diagnosing HFMD is very challenging because its lesion pattern may appear quite similar to that of other diseases such as herpangina, aseptic meningitis, and poliomyelitis. Therefore, clinical symptoms are essential, besides the skin lesions' pattern and position, for a precise diagnosis of this disease. A deep learning-based HFMD detection system can play a significant role in the digital diagnosis of this disease. Various machine learning and deep learning architectures have been proposed for skin disease diagnosis and classification. However, these models are limited to the image classification problem, and diagnosing similar-appearing skin diseases with an image-only approach may result in misclassification or misdiagnosis. Parallel integration of clinical symptoms and images can improve diagnosis and classification performance; however, no deep learning architecture has been developed to diagnose HFMD from both images and clinical data. This paper proposes a novel Hybrid Deep Neural Network that integrates a Multi-Layer Perceptron (MLP) network and a Convolutional Neural Network (CNN) into a single framework for diagnosing HFMD using integrated features from clinical and image data. The proposed Hybrid Deep Neural Network is a multi-branch model comprising an MLP network in the first branch, which extracts the clinical features, and a modified pre-trained CNN architecture (MobileNet or NASNetMobile) in the second branch, which extracts features from skin lesion images.
The features learned from both branches are merged to form an integrated feature vector from clinical data and images, which is fed to a subsequent classification network. We conducted several experiments employing image data only, clinical data only, and both sources of data. The analyses compared and evaluated the performance of a typical MLP model and a CNN model against our proposed Hybrid Deep Neural Network. The novel approach improves on existing image-based classification models and clinical-symptom-based disease classification models, particularly the MLP model. The cross-validated experiments reveal that the proposed Hybrid Deep Neural Network can diagnose the disease with 99%-100% accuracy.
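The merge-then-classify wiring described above is a late-fusion design: each branch maps its own modality to a feature vector, the vectors are concatenated, and a shared head classifies the result. Below is a minimal plain-Python sketch of that wiring only; the real branches would be a trained MLP and CNN, whereas here each branch is a toy linear map with made-up weights, so the example illustrates only the data flow.

```python
# Sketch: late fusion of a clinical branch and an image branch.
# Each "branch" here is a toy linear map; only the concatenate-then-
# classify structure mirrors the hybrid model described above.

def branch(features, weights):
    """One branch: a linear map from raw inputs to learned features."""
    return [sum(f * w for f, w in zip(features, row)) for row in weights]

def hybrid_predict(clinical, image, w_clin, w_img, w_head):
    """Extract features per branch, concatenate, classify with a
    linear head; returns the argmax class index."""
    fused = branch(clinical, w_clin) + branch(image, w_img)  # late fusion
    scores = branch(fused, w_head)
    return max(range(len(scores)), key=lambda i: scores[i])

# Toy inputs and weights (hypothetical, for illustration only).
pred = hybrid_predict([1.0, 0.0], [0.0, 1.0],
                      [[1.0, 0.0]],            # clinical branch -> 1 feature
                      [[0.0, 2.0]],            # image branch    -> 1 feature
                      [[1.0, 0.0], [0.0, 1.0]])  # head over 2 fused features
```

In the actual model, the concatenation happens on learned embeddings rather than single scalars, but the fusion point sits in the same place: after both branches and before the classification head.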