Cancer originates from the uncontrolled growth of abnormal cells into a mass. Chromophores such as hemoglobin and melanin characterize the spectral properties of skin, allowing lesions to be classified by etiology. Hyperspectral imaging systems capture skin-reflected and transmitted light across many wavelength ranges of the electromagnetic spectrum, enabling skin-lesion differentiation through machine learning algorithms. Challenged by limited data availability and subtle inter- and intra-tumoral variability, we introduce a pipeline based on deep neural networks to diagnose hyperspectral skin cancer images, targeting a handheld device equipped with a low-power graphics processing unit for routine clinical testing. Enhanced by data augmentation, transfer learning, and hyperparameter tuning, the proposed architectures aim to match and improve on well-established dermatologist-level detection performance for both benign-malignant and multiclass classification tasks while diagnosing hyperspectral data under real-time constraints. Experiments show 87% sensitivity and 88% specificity for benign-malignant classification and specificity above 80% in the multiclass scenario. AUC measurements suggest classification performance above 90% with adequate thresholding. For binary segmentation, we measured skin Dice and IoU higher than 90%. Segmenting epidermal lesions with the U-Net++ architecture took at most 1.21 s while consuming 5 W, meeting the imposed time limit. Hence, hyperspectral epidermal data can be diagnosed under real-time constraints.
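The Dice and IoU segmentation metrics reported above can be computed directly from binary masks. The following is a minimal NumPy sketch (not the authors' evaluation code); the mask values are illustrative:

```python
import numpy as np

def dice_iou(pred, target):
    """Dice coefficient and IoU for two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + 1e-12)
    iou = inter / (union + 1e-12)
    return dice, iou

# Toy 2x3 masks: 2 pixels overlap, 3 predicted, 3 in ground truth
pred = [[1, 1, 0], [0, 1, 0]]
target = [[1, 0, 0], [0, 1, 1]]
d, i = dice_iou(pred, target)  # Dice = 2*2/(3+3) ≈ 0.667, IoU = 2/4 = 0.5
```

Dice weights the overlap against the mean mask size, while IoU divides by the union, so Dice is always at least as large as IoU for the same masks.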
Cancer is currently one of the most common causes of death worldwide. Innovative methods that support early and accurate cancer detection are required to increase patients' recovery rates. Several studies have shown that medical Hyperspectral Imaging (HSI) combined with artificial intelligence algorithms is a powerful tool for cancer detection. Various preprocessing methods are commonly applied to hyperspectral data to improve algorithm performance; however, no standard exists for these methods, and no studies have compared them in the medical field so far. In this work, we evaluated different combinations of preprocessing steps, including spatial and spectral smoothing, Min-Max scaling, Standard Normal Variate normalization, and a median spatial smoothing technique, with the goal of improving tumor detection in three HSI databases concerning colorectal, esophagogastric, and brain cancers. Two machine learning and deep learning models were used to perform pixel-wise classification. The results showed that the choice of preprocessing method affects tumor-identification performance. Median Filter preprocessing performed slightly better for identifying colorectal tumors (area under the curve of 0.94), whereas esophagogastric and brain tumors were more accurately identified using Min-Max scaling (areas under the curve of 0.93 and 0.92, respectively). However, the Median Filter smooths sharp spectral features, resulting in high variability in classification performance. Therefore, based on these results, obtained on different databases acquired with different HSI instrumentation, the most relevant preprocessing technique identified in this work is Min-Max scaling.
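The three preprocessing techniques compared above are standard operations on a hyperspectral cube. Below is a hedged sketch of one plausible implementation using NumPy and SciPy; the cube dimensions and the per-pixel normalization choice are assumptions, not details from the study:

```python
import numpy as np
from scipy.ndimage import median_filter

# Hypothetical hyperspectral cube: height x width x spectral bands
cube = np.random.rand(4, 4, 100)

def min_max_scale(spectra):
    """Min-Max scaling: map each pixel spectrum to [0, 1]."""
    mn = spectra.min(axis=-1, keepdims=True)
    mx = spectra.max(axis=-1, keepdims=True)
    return (spectra - mn) / (mx - mn + 1e-12)

def snv(spectra):
    """Standard Normal Variate: zero mean, unit variance per spectrum."""
    mu = spectra.mean(axis=-1, keepdims=True)
    sd = spectra.std(axis=-1, keepdims=True)
    return (spectra - mu) / (sd + 1e-12)

def median_smooth(cube, size=3):
    """Median spatial smoothing applied band by band (size x size window)."""
    return median_filter(cube, size=(size, size, 1))

scaled = min_max_scale(cube)
normalized = snv(cube)
smoothed = median_smooth(cube)
```

Note the trade-off the abstract reports: the spatial median filter suppresses noise but also flattens sharp spectral features, whereas Min-Max scaling rescales each spectrum without altering its shape.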
Background: Sociodemographic data indicate a progressive increase in life expectancy and in the prevalence of Alzheimer's disease (AD), which has emerged as one of the greatest public health problems. Its etiology is twofold: non-modifiable factors on the one hand and modifiable factors on the other. Objective: This study aims to develop a processing framework based on machine learning (ML) and optimization algorithms to study sociodemographic, clinical, and analytical variables, selecting the combination that best discriminates between controls and subjects with major neurocognitive disorder (MNCD). Methods: This research follows an observational-analytical design. Two research groups were established: an MNCD group (n = 46) and a control group (n = 38). ML and optimization algorithms were employed to automatically diagnose MNCD. Results: Twelve of the 37 variables were identified in the validation set as the most relevant for MNCD diagnosis. A sensitivity of 100% and a specificity of 71% were achieved using a Random Forest classifier. Conclusion: ML is a potential tool for the automatic prediction of MNCD that can be applied to relatively small preclinical and clinical data sets. These results can be interpreted as supporting the influence of the environment on the development of AD.
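The workflow described here, selecting a relevant subset of variables and then classifying with a Random Forest, can be sketched with scikit-learn. This is a minimal illustration under assumptions: the synthetic data, the univariate F-test selector, and all hyperparameters are placeholders, not the study's actual optimization procedure:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline

# Synthetic stand-in data: 84 subjects (46 + 38) x 37 variables
rng = np.random.default_rng(0)
X = rng.normal(size=(84, 37))
y = np.array([1] * 46 + [0] * 38)  # 1 = MNCD, 0 = control

# Keep the 12 most informative variables, then fit a Random Forest
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=12)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
pipe.fit(X, y)
probs = pipe.predict_proba(X)[:, 1]  # per-subject MNCD probability
```

In practice the selector and classifier hyperparameters would be chosen on a validation set, as the abstract indicates, rather than fixed as here.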
Introduction: When a couple decides to become parents, one of their roles is the socialization of their children. Human beings are born with a genetic predisposition toward socialization; it is the environment in which they develop that shapes the socialization process. Socialization is both an individual and a social phenomenon: it allows society to transmit identity (tradition, culture, roles, etc.) and to endure over time, while it allows the individual to learn how to function normally within that society. The distinct socialization processes are primary, secondary, and virtual socialization, the latter understood as the learning of a tacit reality. Methodology: The PRISMA method was used to conduct documentary research, that is, to compile existing information on the influence of virtualization on the family, social, and academic environments. Results: 490 works were located. Over the last 25 years, the number of publications has tended to increase, with computer science and education research being the areas with the most publications. Among the countries publishing most on this topic, Spain ranks second, surpassed only by the USA. Discussion: The traditional socialization system has shifted its center of reference from the real to the virtual. Moreover, this new system has changed the educational process with the consolidation of new information and communication technologies (NTICs), posing new challenges to the educational system. Conclusions: Both virtualization and digitalization have become an indispensable part of all spheres of socialization and education. The new socialization model is here to stay; it is not a passing fad. Nihilism is not an option, and refusing to adapt is a risk society cannot afford.