Background/Purpose: Acral melanoma is the most common type of melanoma in Asians and usually carries a poor prognosis due to late diagnosis. We applied a convolutional neural network (CNN) to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions.
Methods: A total of 724 dermoscopy images of acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), all confirmed by histopathological examination, were analyzed in this study. For 2-fold cross validation, we split the images into two mutually exclusive subsets: half of the dataset was used for training and the rest for testing, and the diagnostic accuracy was compared with the evaluations of a dermatologist (expert) and a non-expert.
Results: The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23% on the two folds, which was higher than the non-expert's evaluation (67.84%, 62.71%) and close to the expert's (81.08%, 81.64%). Moreover, the convolutional neural network achieved area-under-the-curve values of 0.80 and 0.84 and Youden's index values of 0.6795 and 0.6073, similar to the expert's scores.
Conclusion: Although further data analysis is necessary to improve accuracy, convolutional neural networks could help detect acral melanoma in dermoscopy images of the hands and feet.
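The evaluation metrics named in the abstract (accuracy, area under the ROC curve, Youden's index) can be sketched as follows. This is an illustrative toy, not the study's code; the scores and labels are made up, and the threshold of 0.5 is an assumption.

```python
# Illustrative sketch (not the paper's code): the evaluation metrics used in
# the study -- accuracy, area under the ROC curve, and Youden's index --
# computed from a classifier's scores on one hypothetical test fold.
import numpy as np

def accuracy(labels, scores, threshold=0.5):
    """Fraction of true positives + true negatives among all images."""
    preds = scores >= threshold
    return np.mean(preds == labels)

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairwise wins of positive over negative scores; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def youden_index(labels, scores, threshold=0.5):
    """Youden's J = sensitivity + specificity - 1 at a given threshold."""
    preds = scores >= threshold
    sens = np.mean(preds[labels == 1])       # true positive rate
    spec = np.mean(~preds[labels == 0])      # true negative rate
    return sens + spec - 1

# Toy example: 1 = melanoma, 0 = benign nevus; made-up CNN output scores.
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.6, 0.1])
print(accuracy(labels, scores), auc(labels, scores), youden_index(labels, scores))
```

In a 2-fold setup as described, these three numbers would simply be computed once per fold, with the roles of the training and testing halves swapped.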
We present a Lambertian photometric stereo algorithm that is robust to specularities and shadows, based on a maximum feasible subsystem (Max FS) framework. A Big-M method is developed to find, among the whole set of captured photometric stereo images, which may include non-Lambertian reflections such as specularities and shadows, the maximum subset of images that satisfies the Lambertian constraint. Our algorithm performs purely algebraic pixel-wise optimization without relying on probabilistic/physical reasoning or initialization, and it guarantees global optimality. It can be applied to image sets ranging from four to hundreds of images, and we show that the computation time is reasonably short for a medium number of images (10–100). Experiments with various objects demonstrate the effectiveness of the algorithm.
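The Max FS idea at a single pixel can be illustrated with a small sketch. This is a brute-force subset enumeration for tiny image sets, not the authors' Big-M mixed-integer program, and the light directions, intensities, and tolerance `eps` are made-up assumptions; the paper's method instead certifies the same maximum subset exactly and scalably.

```python
# Illustrative sketch (not the authors' Big-M MILP): the maximum feasible
# subsystem (Max FS) problem at one pixel.  Given m light directions L and
# observed intensities I, find the largest subset of images whose Lambertian
# fit  I_i = L_i . b  (b = albedo * surface normal) has all residuals
# within eps.  For small m we can simply enumerate subsets.
from itertools import combinations
import numpy as np

def maxfs_lambertian(L, I, eps=1e-3):
    """Return (best_subset, b) maximizing |subset| s.t. |L_i.b - I_i| <= eps."""
    m = len(I)
    for size in range(m, 2, -1):              # need >= 3 images to solve b in R^3
        for subset in combinations(range(m), size):
            Ls, Is = L[list(subset)], I[list(subset)]
            b, *_ = np.linalg.lstsq(Ls, Is, rcond=None)
            if np.all(np.abs(Ls @ b - Is) <= eps):
                return list(subset), b        # largest feasible subsystem found
    return [], None

# Toy pixel: 6 images, one corrupted by a specular spike (made-up numbers).
rng = np.random.default_rng(0)
b_true = np.array([0.3, 0.2, 0.9])            # albedo-scaled normal
L = rng.normal(size=(6, 3))                   # light directions
I = L @ b_true                                # ideal Lambertian intensities
I[2] += 0.5                                   # non-Lambertian outlier in image 2
subset, b = maxfs_lambertian(L, I)
```

The corrupted image is excluded from the returned subset, and the least-squares fit over the remaining images recovers the albedo-scaled normal; the Big-M formulation reaches the same optimum by attaching a binary switch to each Lambertian constraint.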
In this paper, a globally optimal algorithm based on a maximum feasible subsystem framework is proposed for robust pairwise registration of point cloud data. Registration is formulated as a branch-and-bound problem with mixed-integer linear programming. Among the putative matches of three-dimensional (3D) features between two sets of range data, the proposed algorithm finds the maximum number of geometrically correct correspondences in the presence of incorrect matches, and it estimates the transformation parameters in a globally optimal manner. The optimization requires no initialization of the transformation parameters. Experimental results demonstrated that the algorithm was more accurate and reliable than state-of-the-art registration methods and was robust against severe outliers/mismatches. The global optimization remained highly effective even when the geometric overlap between the datasets was very small.
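The underlying problem, finding the largest set of mutually consistent correspondences together with the rigid transform, can be sketched with a simple hypothesize-and-verify loop. This is only an illustration with made-up toy data: it scores every correspondence triple and keeps the largest consensus set, whereas the paper certifies the global optimum via branch-and-bound over a mixed-integer linear program.

```python
# Illustrative sketch (not the paper's branch-and-bound MILP): select the
# maximum set of geometrically consistent correspondences between two 3D
# point sets and estimate the rigid transform by exhaustively testing
# transform hypotheses from correspondence triples.
from itertools import combinations
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def max_consensus(P, Q, eps=1e-2):
    """Largest inlier set among putative matches (P[i], Q[i])."""
    best = ([], None, None)
    for tri in combinations(range(len(P)), 3):
        R, t = kabsch(P[list(tri)], Q[list(tri)])        # transform hypothesis
        residuals = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = np.where(residuals <= eps)[0]
        if len(inliers) > len(best[0]):
            R, t = kabsch(P[inliers], Q[inliers])        # refit on all inliers
            best = (inliers.tolist(), R, t)
    return best

# Toy data: 8 correct matches under a known rotation+translation, 2 mismatches.
rng = np.random.default_rng(1)
P = rng.normal(size=(10, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.3])
Q = P @ R_true.T + t_true
Q[7] += 1.0
Q[9] -= 1.0                                              # corrupted matches
inliers, R, t = max_consensus(P, Q)
```

The two mismatched correspondences (indices 7 and 9) are rejected and the ground-truth transform is recovered from the consensus set; unlike this enumeration, the MILP formulation needs no hypothesis sampling and scales to large match sets with a guarantee of optimality.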
Facial expression recognition is one of the most important tasks in human-computer interaction, affective computing, computer vision, and related fields. Feature additive pooling and progressive fine-tuning of convolutional neural networks (CNNs) for facial expression recognition in a static image are introduced. A network is proposed that partially employs the visual geometry group (VGG)-Face model pretrained on the VGG-Face dataset. The characteristics and distribution of the images in each publicly available facial expression database are biased according to the purpose for which the database was built. To alleviate this problem, a CNN model is developed that merges progressively fine-tuned CNNs into a single network. Experiments were carried out to validate the presented method using facial expression images from the Cohn-Kanade+, Karolinska Directed Emotional Faces, and Japanese Female Facial Expression databases, and cross-database evaluation results show that the method is superior to state-of-the-art methods.
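The merging step, combining branch features by addition before a shared classifier, can be sketched abstractly. Everything below is a stand-in: the feature dimension, the number of classes, and the random vectors are assumptions, not the paper's architecture; it only shows the shape of additive pooling.

```python
# Illustrative sketch (hypothetical shapes, not the paper's network):
# "feature additive pooling" -- element-wise addition of penultimate
# feature vectors from two separately fine-tuned CNN branches, followed
# by a single shared softmax classifier over the expression classes.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())                  # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(2)
n_features, n_classes = 128, 7               # assumed: 7 basic expressions

# Stand-ins for penultimate features of two branches, e.g. VGG-Face-
# initialized CNNs progressively fine-tuned on different databases.
feat_a = rng.normal(size=n_features)
feat_b = rng.normal(size=n_features)

# Additive pooling: merge branch features by element-wise summation,
# then classify with one shared fully connected layer.
pooled = feat_a + feat_b
W = rng.normal(size=(n_classes, n_features)) * 0.01
bias = np.zeros(n_classes)
probs = softmax(W @ pooled + bias)
```

Addition (rather than concatenation) keeps the merged feature dimension fixed, so a single classifier head can be shared no matter how many fine-tuned branches are merged.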