Herein, a new deep-learning-based paradigm is proposed that extracts fine-grained, pixel-level differences between skin lesions for high-accuracy classification. As basic feature information for a dermoscopic image of a skin region, 50 features were extracted based on the edge, color, and texture characteristics of the lesion image. For the edge features, a line-segment-type analysis algorithm was used, in which the visual information of the dermoscopic image was analyzed precisely at the pixel level and transformed into a structured pattern. For the color features, the dermoscopic image was converted into multiple color models, and features were obtained by analyzing histograms of the pixel-intensity distributions. Texture features were then extracted using the well-known Laws' texture energy measures. The resulting feature data (50 × 256) were used to classify skin lesions with a classification model based on one-dimensional (1D) convolution layers. Because the model architecture comprises parallel 1D convolution layers, fine-grained features of the dermoscopic image can be identified using different parameters. The proposed method was evaluated on the 2017 and 2018 International Skin Imaging Collaboration (ISIC) datasets. Comparisons with well-known classification models and with other models reported in the literature show the superiority of the proposed model, which achieves an accuracy exceeding 88%.
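The color-feature step above (per-channel intensity histograms contributing rows of the 50 × 256 feature matrix) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the channel layout `(H, W, C)`, the normalization, and the function name are assumptions.

```python
import numpy as np

def color_histogram_features(image, bins=256):
    """Build one normalized 256-bin intensity histogram per channel.

    Each channel of an (already color-model-converted) image yields one
    row of the feature matrix, matching the 256-column layout described
    in the abstract. Normalization to a probability distribution is an
    illustrative choice, not taken from the paper.
    """
    rows = []
    for c in range(image.shape[-1]):
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        rows.append(hist / hist.sum())  # normalize counts to sum to 1
    return np.stack(rows)  # shape: (num_channels, 256)

# usage: a random 8-bit, 3-channel patch standing in for a dermoscopic crop
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
feats = color_histogram_features(patch)
print(feats.shape)  # (3, 256)
```

Repeating this over several color models (e.g. RGB, HSV) and adding the edge and texture rows would assemble the full 50-row feature matrix.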
Human activity recognition (HAR) technology uses computer and machine vision to analyze human activity and gestures by processing sensor data. Three-axis accelerometer and gyroscope data are particularly effective for measuring human activity because movement speed, direction, and angle can be calculated from them. This paper emphasizes the importance of expanding the recognition range of human activity, because the large number of activity types and the similarity of many movements can lead to misrecognition. The proposed method uses three-axis accelerometer and gyroscope data to define human activity patterns visually and to improve recognition accuracy, particularly for similar activities. The method converts the sensor data into an image format, removes noise using time-series features, generates visual waveform patterns, and standardizes the geometric patterns. The resulting 1D, 2D, and 3D data are processed simultaneously: pattern features are extracted from each type with parallel convolution layers, and classification is performed by applying two parallel fully connected layers to the merged outputs of the three convolution branches. The proposed neural network model achieved 98.1% accuracy and recognized 18 activity types, three times more than in previous studies, with a shallower layer structure owing to the enhanced input-data features.
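The parallel-branch idea shared by both abstracts (the same input passing through convolution branches with different parameters, whose outputs are merged before dense layers) can be sketched framework-free with a plain 1-D convolution. The kernel sizes, the averaging kernels, and concatenation as the merge step are illustrative assumptions, not details from either paper.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid (no-padding) 1-D convolution: one filter sliding over a signal."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def parallel_branches(signal, kernel_sizes=(3, 5, 7)):
    """Run the same signal through branches with different kernel sizes,
    then merge the branch outputs by concatenation. In the real models the
    kernels are learned and the merged vector feeds fully connected layers;
    here each kernel is a fixed moving average as a stand-in."""
    outputs = []
    for k in kernel_sizes:
        kernel = np.full(k, 1.0 / k)  # stand-in for a learned filter
        outputs.append(conv1d(signal, kernel))
    return np.concatenate(outputs)

# usage: a short synthetic 1-D sensor trace
sig = np.sin(np.linspace(0, 3 * np.pi, 32))
merged = parallel_branches(sig)
print(merged.shape)  # (84,) = (32-3+1) + (32-5+1) + (32-7+1)
```

Because each branch sees the input at a different receptive-field width, the merged vector carries multi-scale features, which is what lets the models above stay shallow.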