Recent developments in smartphone-based skin cancer diagnosis applications offer simple, portable melanoma risk assessment for early skin cancer detection. Because of the trade-off between time complexity and error rate when running a machine learning algorithm for image analysis on a smartphone, most skin cancer diagnosis apps execute the image analysis on a server. In this study, we investigate the performance of skin cancer image detection and classification on Android devices using the MobileNet v2 deep learning model. We compare performance across several aspects: the object detection and classification method, computer-based versus Android-based image analysis, the image acquisition method, and parameter settings. Images of the skin cancers actinic keratosis and melanoma are used to test the proposed method. Accuracy, sensitivity, specificity, and running time are used as performance measures. Based on the experimental results, the best parameter setting for the MobileNet v2 model on Android, using images from the smartphone camera, produces 95% accuracy for object detection and 70% accuracy for classification. The performance of the Android object detection and classification model is feasible for skin cancer analysis. Android-based image analysis stays within a computing-time threshold that is convenient for the user and matches the computer's accuracy on high-quality images. These findings motivate the development of on-device disease detection with a smartphone camera, aiming at real-time detection and classification with high accuracy.
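The evaluation measures named in the abstract (accuracy, sensitivity, specificity) can be sketched as follows; the confusion-matrix counts below are hypothetical illustrations, not results from the paper.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (true positive rate), and specificity
    (true negative rate) from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts for a melanoma-vs-benign test set of 100 images
acc, sens, spec = diagnostic_metrics(tp=45, fp=3, tn=47, fn=5)
print(round(acc, 2), round(sens, 2), round(spec, 2))  # → 0.92 0.9 0.94
```

Sensitivity matters most in a screening setting, since a missed melanoma (false negative) is costlier than a false alarm.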
Many real-world situations, such as bad weather, result in hazy environments. Images captured in these conditions have low quality due to microparticles in the air, which scatter and absorb light and produce hazy images with various effects. In recent years, image dehazing has been researched in depth to handle images captured under such conditions. Various methods have been developed, from traditional methods to deep learning methods. Traditional methods rely mainly on statistical priors, which fail under certain conditions. This paper proposes a novel architecture based on PDR-Net that uses pyramid dilated convolution together with pre-processing, processing, and post-processing modules and attention mechanisms. The proposed network is trained to minimize L1 loss and perceptual loss on the O-Haze dataset. To evaluate our architecture's results, we used the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and color difference as objective assessments, and a psychovisual experiment as a subjective assessment. On the O-Haze dataset, our architecture outperformed the previous method with an SSIM of 0.798 and a PSNR of 25.39, but not on color difference. The SSIM and PSNR results were corroborated by the subjective assessment with 65 respondents, most of whom preferred the restored images produced by our architecture.
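One of the objective measures used above, PSNR, is straightforward to compute from the mean squared error between the reference and restored images. A minimal NumPy sketch, with toy 2×2 "images" for illustration only:

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a restored image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.array([[100, 100], [100, 100]], dtype=np.uint8)
out = np.array([[100, 100], [100, 110]], dtype=np.uint8)
print(round(psnr(ref, out), 2))  # → 34.15
```

Higher PSNR indicates a restoration closer to the haze-free ground truth; SSIM complements it by measuring structural rather than pixel-wise fidelity.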
White blood cells can give information about a person's health: an imbalanced white blood cell count indicates disease. Disease detection using white blood cells can be done with flow cytometry, but that device has many drawbacks. These drawbacks can be addressed with computer-aided classification. However, another problem arises: the available white blood cell datasets are small, and their class distributions are imbalanced. This problem can be solved by using classic data augmentation and a DCGAN to create synthetic images that balance the white blood cell dataset. The balanced dataset is then used to train a ResNet50 classification model. Accuracy, precision, recall, and F1-score are used as performance measures. The results show that the model obtains an accuracy of 82.5%. ResNet50 was chosen because it produces good accuracy in medical image classification.
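The classic-augmentation half of the balancing strategy can be sketched with simple geometric transforms (flips and rotations); the helper below and its toy inputs are hypothetical illustrations, not the paper's code, and the DCGAN component is omitted.

```python
import numpy as np

def balance_with_flips(images, target_count, seed=0):
    """Oversample a minority class with classic augmentation (flips and
    90-degree rotations) until it contains target_count images."""
    rng = np.random.default_rng(seed)
    ops = [np.fliplr, np.flipud,
           lambda im: np.rot90(im, 1), lambda im: np.rot90(im, 3)]
    augmented = list(images)
    while len(augmented) < target_count:
        base = images[rng.integers(len(images))]      # pick a source image
        op = ops[rng.integers(len(ops))]              # pick a random transform
        augmented.append(op(base))
    return augmented

# Toy minority class of three 8x8 "cell" images, balanced up to 10 samples
minority = [np.full((8, 8), i, dtype=np.uint8) for i in range(3)]
balanced = balance_with_flips(minority, target_count=10)
print(len(balanced), balanced[0].shape)  # → 10 (8, 8)
```

Flips and rotations are label-preserving for cell images, which is why they are a safe complement to GAN-generated samples when equalizing class counts.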