Automated skin lesion diagnosis from dermoscopic images is difficult owing to several notable problems, such as artefacts (e.g., hairs), irregular lesion shapes, and the extraction of irrelevant features. These problems complicate both segmentation and classification. In this research, we propose optimized colour features (OCFs) for lesion segmentation and a deep convolutional neural network (DCNN)-based skin lesion classification. A hybrid technique is proposed to remove the artefacts and improve lesion contrast. A colour segmentation technique, termed OCF, is then presented. The OCF approach is further improved by an existing saliency approach, which is fused with it through a novel pixel-based method. A DCNN-9 model is implemented to extract deep features, which are fused with the OCFs by a novel parallel fusion approach. A normal distribution-based high-ranking feature selection technique is then applied to retain the most robust features for classification. The proposed method is evaluated on the ISBI series (2016, 2017, and 2018) datasets. The experiments are performed in two steps and achieve an average segmentation accuracy of more than 90% on the selected datasets. Moreover, the achieved classification accuracies of 92.1%, 96.5%, and 85.1% on the three datasets, respectively, show that the presented method performs remarkably well.
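The fusion and selection steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes "parallel fusion" means zero-padding the shorter vector and taking an element-wise maximum, and that "normal distribution-based high-ranking selection" means keeping the features with the highest z-scores under a normal fit; the function names and the `keep_ratio` parameter are hypothetical.

```python
import numpy as np

def parallel_fuse(deep_feats, ocf_feats):
    """Hypothetical parallel (element-wise) fusion: zero-pad the
    shorter vector, then take the element-wise maximum."""
    n = max(len(deep_feats), len(ocf_feats))
    a = np.zeros(n)
    a[:len(deep_feats)] = deep_feats
    b = np.zeros(n)
    b[:len(ocf_feats)] = ocf_feats
    return np.maximum(a, b)

def select_high_ranking(features, keep_ratio=0.5):
    """Keep the features whose values lie in the upper tail of a
    normal distribution fitted to the fused feature vector."""
    mu, sigma = features.mean(), features.std()
    z = (features - mu) / (sigma + 1e-12)   # standardized scores
    order = np.argsort(-z)                  # highest z-scores first
    k = max(1, int(len(features) * keep_ratio))
    idx = np.sort(order[:k])                # preserve original ordering
    return features[idx], idx
```

In practice the deep vector would come from the DCNN-9 average-pool activations and the OCF vector from the colour segmentation stage; here plain arrays stand in for both.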
Multiclass classification of brain tumors is an important area of research in medical imaging. Since accuracy is crucial for classification, computer vision researchers have introduced a number of techniques; however, these still suffer from low accuracy. In this article, a new automated deep learning method is proposed for the classification of multiclass brain tumors. To realize the proposed method, the pre-trained DenseNet201 deep learning model is fine-tuned and then trained using deep transfer learning on imbalanced data. Features of the trained model are extracted from the average-pooling layer, which represents very deep information about each tumor type. However, the features of this layer alone are not sufficient for precise classification; therefore, two feature selection techniques are proposed. The first is Entropy–Kurtosis-based High Feature Values (EKbHFV) and the second is a modified genetic algorithm (MGA) based on metaheuristics. The features selected by the MGA are further refined by a proposed new threshold function. Finally, the EKbHFV- and MGA-based features are fused using a non-redundant serial-based approach and classified with a multiclass cubic SVM classifier. For the experimental process, two datasets, BRATS2018 and BRATS2019, are used without augmentation, and an accuracy of more than 95% is achieved. A detailed comparison of the proposed method with other neural networks shows the significance of this work.
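A rough sketch of the two selection and fusion ideas is given below. This is an assumption-laden illustration, not the paper's method: it supposes EKbHFV scores each feature column by the product of its histogram entropy and kurtosis and keeps the top-scoring columns, and that "non-redundant serial fusion" means column-wise concatenation with duplicate columns dropped. All function names and the `keep` parameter are hypothetical.

```python
import numpy as np

def entropy(col, bins=16):
    """Shannon entropy of a feature column, estimated via a histogram."""
    hist, _ = np.histogram(col, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def kurt(col):
    """Fourth standardized moment (kurtosis) of a feature column."""
    mu, sigma = col.mean(), col.std()
    if sigma == 0:
        return 0.0
    return ((col - mu) ** 4).mean() / sigma ** 4

def ekbhfv_select(X, keep=100):
    """Hypothetical EKbHFV scoring: rank columns by entropy * kurtosis
    and keep the highest-scoring ones."""
    scores = np.array([entropy(X[:, j]) * kurt(X[:, j])
                       for j in range(X.shape[1])])
    idx = np.sort(np.argsort(-scores)[:min(keep, X.shape[1])])
    return X[:, idx], idx

def serial_fuse_nonredundant(A, B):
    """Serial (column-wise) fusion that drops duplicate feature columns."""
    fused = np.hstack([A, B])
    _, keep_idx = np.unique(fused.round(6), axis=1, return_index=True)
    return fused[:, np.sort(keep_idx)]
```

In the paper, `A` and `B` would hold the EKbHFV- and MGA-selected DenseNet201 features before the cubic SVM stage.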
As the number of internet users increases, so does the number of malicious attacks using malware. The detection of malicious code is becoming critical, and existing approaches need to be improved. Here, we propose a feature fusion method that combines features extracted from the pre-trained AlexNet and Inception-v3 deep neural networks with features obtained through segmentation-based fractal texture analysis (SFTA) of images representing the malware code. Deep convolutional neural network (CNN) features are extracted from both pre-trained models to improve malware classifier accuracy, because the two models learn complementary features. This technique produces a fused, multimodal representation of malicious code that is used to classify the grayscale images into 25 malware classes. The features extracted from the malware images are then classified using different variants of support vector machines (SVM), k-nearest neighbors (KNN), decision trees (DT), and other classifiers. To improve the classification results, we also adopt data augmentation based on affine image transforms. The presented method is evaluated on the Malimg malware image dataset and achieves an accuracy of 99.3%, the best among the competing approaches.
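The pipeline's front end can be sketched as follows, under stated assumptions: malware binaries are commonly rendered as grayscale images by treating each byte as a pixel intensity (the convention behind the Malimg dataset), and "feature fusion" is illustrated here as plain concatenation of the three feature vectors. The function names, the `width` parameter, and the concatenation step are illustrative, not the paper's exact procedure.

```python
import numpy as np

def malware_bytes_to_image(raw: bytes, width: int = 256) -> np.ndarray:
    """Render a malware binary as a 2-D grayscale image:
    each byte becomes one pixel intensity in [0, 255]."""
    arr = np.frombuffer(raw, dtype=np.uint8)
    rows = len(arr) // width              # drop the trailing partial row
    return arr[: rows * width].reshape(rows, width)

def fuse_features(alexnet_f, inception_f, sfta_f):
    """Hypothetical fusion step: concatenate the AlexNet, Inception-v3,
    and SFTA feature vectors into one multimodal representation."""
    return np.concatenate([alexnet_f, inception_f, sfta_f])
```

The resulting image would be resized to each network's input size before feature extraction, and the fused vector would feed the SVM/KNN/DT classifiers described above.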