The prevalence of skin diseases has increased dramatically in recent decades, and they are now considered major chronic diseases globally. People suffer from a broad spectrum of skin diseases, among which skin tumors are potentially aggressive and life-threatening. However, the severity of skin tumors can be managed through treatment if they are diagnosed early. Health practitioners usually apply manual or computer vision-based tools for skin tumor diagnosis, which may cause misinterpretation of the disease and lengthen analysis time. Cutting-edge technologies such as deep learning combined with a federated machine learning approach, however, enable health practitioners (dermatologists) to diagnose the type and severity level of skin diseases. Therefore, this study proposes an adaptive federated machine learning-based skin disease model (using an adaptive ensemble convolutional neural network as the core classifier) as a step toward an intelligent dermoscopy device for dermatologists. The proposed federated machine learning-based architecture consists of intelligent local edges (dermoscopes) and a global point (server). The proposed architecture can diagnose the type of disease and continuously improve its accuracy. Experiments were carried out in a simulated environment using the International Skin Imaging Collaboration (ISIC) 2019 dataset (dermoscopy images) to test and validate the model’s classification accuracy and adaptability. In the future, this study may lead to the development of a federated machine learning-based (hardware) dermoscopy device to assist dermatologists in skin tumor diagnosis.
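The local-edge/global-server architecture described above typically relies on an aggregation step at the global point. The following is a minimal sketch of one standard such step, federated averaging (FedAvg), in which the server combines parameter updates from local dermoscopy edges weighted by their sample counts; the names (`client_weights`, `n_samples`) are illustrative, and the paper's adaptive ensemble CNN details are not reproduced here.

```python
# Sketch of a FedAvg-style aggregation step a global server could use to
# combine model parameters from local dermoscopy edges. Illustrative only;
# not the paper's exact adaptive scheme.
from typing import Dict, List
import numpy as np

def federated_average(client_weights: List[Dict[str, np.ndarray]],
                      n_samples: List[int]) -> Dict[str, np.ndarray]:
    """Weighted average of per-client model parameters (FedAvg)."""
    total = sum(n_samples)
    averaged = {}
    for name in client_weights[0]:
        # Each client's contribution is weighted by its local sample count.
        averaged[name] = sum(w[name] * (n / total)
                             for w, n in zip(client_weights, n_samples))
    return averaged

# Example: two edge devices with tiny one-layer "models"
w1 = {"conv1": np.array([1.0, 2.0])}
w2 = {"conv1": np.array([3.0, 4.0])}
global_w = federated_average([w1, w2], n_samples=[100, 300])
```

Because the averaging is weighted, edges that have seen more patients pull the global model more strongly, which is one simple way the architecture can "continuously improve its accuracy" as local data accumulates.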
Cyber security is one of the major concerns of today’s connected world. On all platforms of modern communication technology, such as wired, wireless, local, and remote access, hackers attempt to corrupt system functionality, circumvent security measures, and steal sensitive information. Among the many techniques hackers use, port scanning and Distributed Denial of Service (DDoS) attacks are very common. In this paper, machine learning is applied to the classification of port scanning and DDoS attacks in a mix of normal and attack traffic. Different machine learning algorithms are trained and tested on a recently published benchmark dataset (CICIDS2017) to identify the best-performing algorithms on data that contains recent vectors of port scanning and DDoS attacks. The classification results show that all variants of discriminant analysis and the Support Vector Machine (SVM) provide good testing accuracy, i.e., more than 90%. According to a subjective rating criterion defined in this paper, 9 of the 22 evaluated algorithms receive the highest rating (good), as they provide more than 85% classification (testing) accuracy. This comparative analysis is further extended to the training performance of the machine learning models through k-fold cross-validation, Area Under Curve (AUC) analysis of the Receiver Operating Characteristic (ROC) curves, and dimensionality reduction using Principal Component Analysis (PCA). To the best of our knowledge, a comprehensive comparison of machine learning algorithms on the CICIDS2017 dataset covering such recent port scanning and DDoS attack features has been lacking.
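The evaluation protocol described above (train/test accuracy, k-fold cross-validation, and a PCA variant) can be sketched as follows. This is a hedged illustration using scikit-learn with synthetic data standing in for CICIDS2017, whose loading and preprocessing the abstract does not specify; the model set (LDA, linear SVM, PCA+SVM) is a small sample of the 22 algorithms compared in the paper.

```python
# Sketch of the comparison protocol: several classifiers evaluated by
# k-fold cross-validation on the training split and by accuracy on a
# held-out test split, including a PCA dimensionality-reduction variant.
# Synthetic data stands in for CICIDS2017.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), LinearSVC(max_iter=5000)),
    # Dimensionality reduction before the classifier, as in the PCA analysis
    "PCA+SVM": make_pipeline(StandardScaler(), PCA(n_components=10),
                             LinearSVC(max_iter=5000)),
}

results = {}
for name, model in models.items():
    cv_acc = cross_val_score(model, X_tr, y_tr, cv=5).mean()  # k-fold training performance
    test_acc = model.fit(X_tr, y_tr).score(X_te, y_te)        # held-out testing accuracy
    results[name] = (cv_acc, test_acc)
    print(f"{name}: cv={cv_acc:.3f} test={test_acc:.3f}")
```

A rating scheme like the paper's can then be applied by thresholding `test_acc` (e.g., "good" above 85%).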
Image classification of a visual scene based on visibility is significant due to the rise in readily available automated solutions. Currently, only two extremes of image visibility are commonly considered, i.e., dark and bright. However, normal environments also include semi-dark scenarios, so restricting analysis to these visual extremes hinders the accurate extraction of image features and should be avoided. Fundamentally, there are two broad approaches to visual scene-based image classification: machine learning (ML) methods and computer vision (CV) methods. In ML, insufficient data, sophisticated hardware requirements, and long image classifier training times remain significant problems, and these techniques fail to classify visual scene-based images with high accuracy. The alternative, CV methods, also has major issues. CV methods provide some basic procedures that may assist in such classification but, to the best of our knowledge, no CV algorithm exists to perform it, i.e., they do not account for semi-dark images in the first place. Moreover, these methods do not provide a well-defined protocol to calculate an image’s content visibility and thereby classify images. One of the key approaches to calculating an image’s content visibility is backed by the HSL (hue, saturation, lightness) color model, which allows the visibility of a scene to be computed from the lightness/luminance of each pixel. Recognizing the high potential of the HSL color model, we propose a novel framework relying on the simple approach of statistically manipulating an entire image’s pixel intensities, represented in the HSL color model. The proposed algorithm, namely Relative Perceived Luminance Classification (RPLC), uses the HSL color model to correctly identify the luminosity values of the entire image.
Our findings show that the proposed method yields high classification accuracy (over 78%) with a small error rate, and that the computational complexity of RPLC is much lower than that of state-of-the-art ML algorithms.
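The core idea above, deriving per-pixel HSL lightness and aggregating it statistically over the whole image before mapping the result to a visibility class, can be sketched as follows. The class thresholds (0.25 and 0.6) and the use of the mean as the statistic are assumptions for illustration; the paper's actual RPLC decision rule is not reproduced here.

```python
# Illustrative sketch of HSL-lightness-based visibility classification.
# Thresholds and the mean statistic are assumed, not taken from the paper.
import numpy as np

def hsl_lightness(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel HSL lightness L = (max(R,G,B) + min(R,G,B)) / 2, RGB in [0, 1]."""
    return (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2.0

def classify_visibility(rgb_image: np.ndarray) -> str:
    """Map the image-wide mean lightness to one of three visibility classes."""
    mean_l = hsl_lightness(rgb_image).mean()  # statistic over all pixels
    if mean_l < 0.25:
        return "dark"
    elif mean_l < 0.6:
        return "semi-dark"
    return "bright"

dim = np.full((4, 4, 3), 0.35)    # uniform dim test image
print(classify_visibility(dim))   # "semi-dark" under the assumed thresholds
```

Operating on a single statistic of the whole image is what keeps the computational cost far below that of trained ML classifiers, which matches the complexity claim above.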
Super-pixels represent perceptually similar visual feature vectors of an image: meaningful groups of pixels bunched together based on the color and proximity of individual pixels. The accuracy of super-pixel computation suffers when an image has low pixel intensities, i.e., for semi-dark images. A widely used method for computing super-pixels is SLIC (Simple Linear Iterative Clustering), owing to its simple approach; SLIC is considerably faster than other state-of-the-art methods. However, it lacks the functionality to retain the content-aware information of the image due to its constrained underlying clustering technique, and its efficiency on semi-dark images is lower than on bright images. We extend SLIC with several computational distance measures to identify potential substitutes that yield regular and accurate image segments. We propose a novel SLIC extension, namely SLIC++, based on a hybrid distance measure that retains the content-aware information lacking in SLIC, making SLIC++ more effective than SLIC. The proposed SLIC++ is effective not only for normal images but also for semi-dark images. The hybrid content-aware distance measure integrates the Euclidean super-pixel calculation of SLIC with geodesic distance calculations to retain the angular movements of the components present in the visual image, exclusively targeting semi-dark images. The proposed method is quantitatively and qualitatively analyzed using the Berkeley dataset. We not only visually illustrate the benchmarking results but also report the associated accuracies against the ground-truth image segments in terms of boundary precision. SLIC++ attains high accuracy and creates content-aware super-pixels even when the images are semi-dark in nature.
Our findings show that SLIC++ achieves a precision of 39.7%, outperforming SLIC by a substantial margin of up to 8.1%.
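The idea of blending SLIC's Euclidean color-plus-spatial distance with a geodesic term can be sketched as below. The blend weight `alpha` and the straight-line geodesic approximation (accumulated intensity change along the path between two pixels) are illustrative assumptions; SLIC++'s exact hybrid formulation is not reproduced here.

```python
# Sketch of a hybrid SLIC-style distance: the classic SLIC term blended
# with a simple geodesic approximation that accumulates intensity changes
# along the straight path between two pixels. Illustrative only.
import numpy as np

def slic_distance(c1, c2, p1, p2, m=10.0, S=10.0):
    """Classic SLIC distance: color term plus spatial term scaled by m/S."""
    d_color = np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float))
    d_space = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))
    return np.sqrt(d_color**2 + (d_space * m / S)**2)

def geodesic_approx(gray, p1, p2, steps=20):
    """Accumulated intensity change along the straight line p1 -> p2."""
    ys = np.linspace(p1[0], p2[0], steps).round().astype(int)
    xs = np.linspace(p1[1], p2[1], steps).round().astype(int)
    vals = gray[ys, xs]
    return np.abs(np.diff(vals)).sum()

def hybrid_distance(gray, c1, c2, p1, p2, alpha=0.5):
    """Blend of the Euclidean SLIC term and the geodesic term (weight alpha assumed)."""
    return alpha * slic_distance(c1, c2, p1, p2) + \
           (1 - alpha) * geodesic_approx(gray, p1, p2)

gray = np.zeros((20, 20)); gray[:, 10:] = 1.0  # image with a step edge at column 10
d_same = hybrid_distance(gray, 0.2, 0.2, (5, 2), (5, 8))    # both pixels left of the edge
d_cross = hybrid_distance(gray, 0.2, 0.2, (5, 2), (5, 18))  # path crosses the edge
```

The geodesic term penalizes paths that cross intensity boundaries (`d_cross > d_same` here), which is what lets a hybrid measure keep super-pixels from leaking across faint object edges in semi-dark images.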