Healthcare systems worldwide generate vast amounts of data from many different sources. Although highly complex for a human observer, these data contain patterns and subtle variations in genomic, radiological, laboratory, or clinical information that reliably differentiate phenotypes or enable high predictive accuracy in health-related tasks. Convolutional neural networks (CNNs) are increasingly applied to image data for a wide range of tasks. Their use for non-imaging data becomes feasible through modern machine learning techniques that convert non-imaging data into images before feeding them into a CNN. Because healthcare providers rarely base their decisions on a single data modality, this approach opens the door to multi-input (mixed-data) models that combine patient information, such as genomic, radiological, and clinical data, to train a hybrid deep learning model. This reflects a core characteristic of artificial intelligence: simulating natural human behavior. The present review focuses on key advances in machine and deep learning that allow multi-perspective pattern recognition across the entire information set of a patient in spine surgery. To the best of our knowledge, this is the first review of artificial intelligence in spine surgery to focus on hybrid deep learning models, a topic of particular interest because future tools are unlikely to rely on a single data modality. The techniques discussed could help establish a new approach to decision-making in spine surgery based on three fundamental pillars: (1) patient-specific, (2) artificial intelligence-driven, and (3) integrating multimodal data. The findings reveal promising research already underway to develop multi-input, mixed-data hybrid decision-support models; their implementation in spine surgery may hence be only a matter of time.
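To make the multi-input idea concrete, the sketch below shows one common way to fuse an imaging branch (a small CNN) with a tabular branch (a small MLP) in a single hybrid model. This is a minimal PyTorch illustration, not an architecture taken from any of the reviewed studies; the layer sizes, the single-channel 64x64 image input, and the ten clinical features are arbitrary placeholder assumptions.

```python
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    """Multi-input model: a CNN branch for images, an MLP branch for tabular data."""
    def __init__(self, num_clinical_features: int, num_classes: int = 2):
        super().__init__()
        # CNN branch for radiological images (here: 1-channel placeholder input)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),  # -> (batch, 32)
        )
        # MLP branch for tabular data (e.g., labs, demographics, genomic summaries)
        self.mlp = nn.Sequential(
            nn.Linear(num_clinical_features, 32), nn.ReLU(),
        )
        # Fused head: classify on the concatenated features of both branches
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, image: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        img_feat = self.cnn(image)
        tab_feat = self.mlp(clinical)
        return self.head(torch.cat([img_feat, tab_feat], dim=1))

# Usage with dummy inputs: a batch of 4 images plus 10 clinical features each
model = HybridModel(num_clinical_features=10)
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 10))
print(logits.shape)  # torch.Size([4, 2])
```

In a real application, the image branch would typically be a deeper pretrained backbone, but the concatenate-then-classify pattern shown here is the core of most mixed-data designs.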
Background: Ex vivo fluorescent confocal microscopy (FCM) is a novel and effective method for fast, automated histological tissue examination, whereas conventional diagnostic methods rely primarily on the skills of the histopathologist. In this study, we investigated for the first time the potential of convolutional neural networks (CNNs) for the automated classification of oral squamous cell carcinoma on ex vivo FCM images. Materials and Methods: Tissue samples from 20 patients were collected, scanned with an ex vivo confocal microscope immediately after resection, and examined histopathologically. A CNN architecture (MobileNet) was trained and tested for classification accuracy. Results: The model achieved a sensitivity of 0.47 and a specificity of 0.96 in the automated classification of cancerous tissue. Conclusion: In this preliminary work, we trained a CNN model on a limited number of ex vivo FCM images and obtained promising results for the automated classification of cancerous tissue. Further studies with larger sample sizes are warranted before this technology can be introduced into the clinic.
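The abstract names MobileNet but does not specify the variant, framework, or training details. The following is therefore a hypothetical transfer-learning sketch using torchvision's MobileNetV2 as a stand-in: the ImageNet-pretrained backbone is frozen and only a new two-class head (cancerous vs. non-cancerous) is trained. The input size, batch size, and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained MobileNetV2 and freeze its convolutional backbone,
# a common choice when only a small number of labeled images is available.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the classifier head for binary classification
model.classifier[1] = nn.Linear(model.last_channel, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

# One training step on a dummy batch standing in for FCM image patches
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

With only 20 patients, as in the study, freezing the backbone and training just the head helps limit overfitting; fine-tuning deeper layers would usually require a larger dataset.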