Oral cancer is a major global health issue, accounting for 177,384 deaths in 2018, and it is most prevalent in low- and middle-income countries. Automating the identification of potentially malignant and malignant lesions in the oral cavity could enable low-cost and early diagnosis of the disease. Building a large library of well-annotated oral lesions is key. As part of the MeMoSA® (Mobile Mouth Screening Anywhere) project, images are currently being gathered from clinical experts across the world, who have been provided with an annotation tool to produce rich labels. This paper presents a novel strategy for combining bounding box annotations from multiple clinicians. Deep neural networks were then used to build automated systems that derive complex patterns for tackling this difficult task. Using the initial data gathered in this study, two deep learning based computer vision approaches were assessed for the automated detection and classification of oral lesions for the early detection of oral cancer: image classification with ResNet-101 and object detection with Faster R-CNN. Image classification achieved an F1 score of 87.07% for identifying images that contained lesions and 78.30% for identifying images that required referral. Object detection achieved an F1 score of 41.18% for detecting lesions that required referral. Further performances are reported with respect to classification by the type of referral decision. Our initial results demonstrate that deep learning has the potential to tackle this challenging task. INDEX TERMS Composite annotation, deep learning, image classification, object detection, oral cancer, oral potentially malignant disorders.
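The abstract above does not spell out the composite-annotation strategy, but the general idea of combining bounding boxes from several annotators can be illustrated. The sketch below is a hypothetical, simplified approach (not the paper's exact algorithm): boxes from different clinicians are greedily clustered by intersection-over-union (IoU), and each cluster is collapsed into a single averaged box.

```python
# Illustrative sketch only, not the paper's published method: merge
# bounding-box annotations from multiple clinicians by greedily grouping
# boxes whose IoU exceeds a threshold and averaging each group.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_annotations(boxes, iou_thresh=0.5):
    """Greedily cluster overlapping boxes, then average each cluster."""
    clusters = []
    for box in boxes:
        for cluster in clusters:
            if iou(box, cluster[0]) >= iou_thresh:
                cluster.append(box)
                break
        else:
            clusters.append([box])   # no overlap: start a new cluster
    return [tuple(sum(c[i] for c in cluster) / len(cluster) for i in range(4))
            for cluster in clusters]
```

For example, two clinicians marking roughly the same lesion, `(10, 10, 50, 50)` and `(12, 12, 52, 52)`, collapse into one composite box `(11, 11, 51, 51)`, while a disjoint annotation elsewhere in the image survives as its own box. A production strategy would also need to weigh annotator agreement and handle label conflicts.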
Oral cancer is most prevalent in low- and middle-income countries, where it is associated with late diagnosis. A significant factor in this is limited access to specialist diagnosis. The use of artificial intelligence for decision making on oral cavity images has the potential to improve cancer management and survival rates. This study forms part of the MeMoSA® (Mobile Mouth Screening Anywhere) project. In this paper, we extended our previous deep learning work and focused on the binary image classification of 'referral' vs. 'non-referral'. Transfer learning was applied, with several common pre-trained deep convolutional neural network architectures compared for the task of fine-tuning on a small oral image dataset. Improvements over our previous work were made, with an accuracy of 80.88% achieved, a corresponding sensitivity of 85.71% and a specificity of 76.42%.
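The simplest form of the transfer-learning setup described above is to freeze a pre-trained backbone and train only a new binary 'referral' head on its features. The sketch below is a minimal stand-in under that assumption: synthetic 16-dimensional vectors play the role of frozen CNN embeddings (in practice these would come from a pre-trained network such as ResNet), and a logistic-regression head is fitted by gradient descent.

```python
import numpy as np

# Minimal sketch of the final stage of transfer learning: the backbone is
# frozen and only a new binary head is trained on its features. Synthetic
# vectors stand in for real CNN embeddings.

rng = np.random.default_rng(0)

def train_head(feats, labels, lr=0.1, epochs=200):
    """Logistic-regression head trained with full-batch gradient descent."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = feats @ w + b
        p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of 'referral'
        grad = p - labels              # dL/dz for binary cross-entropy
        w -= lr * feats.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Two well-separated clusters stand in for 'referral' / 'non-referral' features.
referral = rng.normal(1.0, 0.5, size=(50, 16))
non_referral = rng.normal(-1.0, 0.5, size=(50, 16))
feats = np.vstack([referral, non_referral])
labels = np.concatenate([np.ones(50), np.zeros(50)])

w, b = train_head(feats, labels)
preds = (feats @ w + b > 0).astype(float)
accuracy = (preds == labels).mean()
```

Fine-tuning deeper layers of the backbone, as compared across architectures in the study, follows the same principle but updates the pre-trained weights as well, typically at a lower learning rate.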
Objective To evaluate the accuracy of MeMoSA®, a mobile phone application for reviewing images of oral lesions, in identifying oral cancers and oral potentially malignant disorders requiring referral. Subjects and Methods A prospective study of 355 participants, including 280 with oral lesions/variants, was conducted. Adults aged ≥18 years treated at tertiary referral centres were included. Images of the oral cavity were taken using MeMoSA®. The identification of the presence of a lesion/variant and the referral decision made using MeMoSA® were compared to clinical oral examination, using kappa statistics for intra‐rater agreement. Sensitivity, specificity, concordance and F1 score were computed. Images were reviewed by an off‐site specialist and inter‐rater agreement was evaluated. Images from sequential clinical visits were compared to evaluate observable changes in the lesions. Results Kappa values comparing MeMoSA® with clinical oral examination for lesion detection and referral decision were 0.604 and 0.892, respectively. Sensitivity and specificity for the referral decision were 94.0% and 95.5%. Concordance and F1 score were 94.9% and 93.3%, respectively. Inter‐rater agreement for the referral decision was 0.825. Progression or regression of lesions was systematically documented using MeMoSA®. Conclusion Referral decisions made through MeMoSA® are highly comparable to clinical examination, demonstrating that it is a reliable telemedicine tool to facilitate the identification of high‐risk lesions for early management.
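The metrics reported above (sensitivity, specificity, F1 and Cohen's kappa) all derive from a 2×2 agreement table. As a reference for how they relate, the sketch below computes them from the four cell counts; the example counts are illustrative, not the study's actual table.

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, F1 and Cohen's kappa from a 2x2 table."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_obs = (tp + tn) / n
    p_exp = ((tp + fn) / n) * ((tp + fp) / n) + ((fp + tn) / n) * ((fn + tn) / n)
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sensitivity, specificity, f1, kappa

# Illustrative counts (not the study's data): 50 true referrals, 44 true
# non-referrals, with 47 and 42 correctly identified respectively.
sens, spec, f1, kappa = binary_metrics(tp=47, fp=2, fn=3, tn=42)
```

With these illustrative counts, sensitivity is 47/50 = 94.0% and specificity is 42/44 ≈ 95.5%; kappa near 0.9, as in the study, indicates agreement far above chance.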
Oral cancer has been recognized as a significant challenge to healthcare. In Malaysia, numerous patients frequently present with later stages of cancers to the highly subsidized public healthcare facilities. Such a trend contributes to a substantial social and economic burden. This study aims to determine the cost of treating oral potentially malignant disorders (OPMD) and oral cancer from a public healthcare provider's perspective. Medical records from two tertiary public hospitals were systematically abstracted to identify events and resources consumed retrospectively from August 2019 to January 2020. The cost accrued was used to estimate annual initial and maintenance costs via two different methods: inverse probability weighting (IPW) and unweighted average. A total of 86 OPMD and 148 oral cancer cases were included. The initial phase mean unadjusted cost was USD 2,861 (SD = 2,548) for OPMD and USD 38,762 (SD = 12,770) for the treatment of cancer. Further annual estimates of initial phase cost based on the IPW method for OPMD, early and late-stage cancer were USD 3,561 (SD = 4,154), USD 32,530 (SD = 12,658) and USD 44,304 (SD = 16,240) respectively. The overall cost of late-stage cancer was significantly higher than early-stage by USD 11,740; 95% CI [6,853 to 16,695]; p < 0.001. Higher surgical care and personnel costs predominantly contributed to the larger expenditure. In contrast, no significant difference was identified between the two cancer stages in the maintenance phase, USD 700; 95% CI [-1,142 to 2,541]; p = 0.457. A crude comparison of the IPW estimate with the unweighted average displayed a significant difference in the initial phase, with the latter being consistently higher across all groups. The IPW method was shown to use data more efficiently by adjusting costs according to survival and follow-up.
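The intuition behind the IPW estimator mentioned above can be sketched in a few lines. This is a deliberate simplification of the study's method: each patient with complete follow-up is up-weighted by the inverse of an assumed probability of remaining uncensored, so that censored patients (lost to follow-up or deceased mid-phase) are represented rather than silently dropped. The per-patient probabilities here are given directly; in practice they would be estimated, e.g. from survival curves.

```python
# Simplified sketch of inverse probability weighting (IPW) for mean cost.
# Not the study's exact estimator: p_uncensored is assumed known here,
# whereas in practice it is estimated from survival/follow-up data.

def ipw_mean_cost(costs, observed, p_uncensored):
    """Each fully observed patient's cost is weighted by 1 / p_i,
    where p_i is that patient's probability of complete follow-up."""
    n = len(costs)
    return sum(c / p for c, o, p in zip(costs, observed, p_uncensored) if o) / n

def unweighted_mean_cost(costs, observed):
    """Naive average over fully observed patients only."""
    seen = [c for c, o in zip(costs, observed) if o]
    return sum(seen) / len(seen)

# Four illustrative patients; the second was censored.
costs = [100.0, 200.0, 300.0, 400.0]
observed = [True, False, True, True]
p_uncensored = [1.0, 0.5, 0.5, 1.0]

ipw = ipw_mean_cost(costs, observed, p_uncensored)
naive = unweighted_mean_cost(costs, observed)
```

In this toy example the IPW estimate (275.0) differs from the naive complete-case average (≈266.7) because the fully observed patient who was at high risk of censoring is up-weighted to stand in for similar patients who were lost.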
While cost is not a primary consideration in treatment recommendations, our analysis demonstrates the potential economic benefit of investing in preventive medicine and early detection.
Oral cancer is a major health issue in low- and middle-income countries due to late diagnosis. Automated algorithms and tools have the potential to identify oral lesions for the early detection of oral cancer. In this paper, we aim to develop a novel deep learning framework named D'OraCa to classify oral lesions using photographic images. We are the first to develop a mouth landmark detection model for oral images and incorporate it into the oral lesion classification model as guidance to improve classification accuracy. We evaluate the performance of five different deep convolutional neural networks, and MobileNetV2 was chosen as the feature extractor for our proposed mouth landmark detection model. Quantitative and qualitative results demonstrate the effectiveness of the mouth landmark detection model in guiding the classification model to classify oral lesions into four different referral decision classes. We train our proposed mouth landmark model on a combination of five datasets containing 221,565 images. Then, we train and evaluate our proposed classification model with mouth landmark guidance using 2,455 oral images. The results are consistent with clinicians' assessments, and the F1 score of the classification model improved to 61.68%.
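One common way to inject landmark guidance into a classifier, and a plausible reading of the design above, is late fusion: the landmark model's output is concatenated with the image features before the final classification layer. The sketch below is hypothetical (the paper's exact fusion mechanism is not described in the abstract), with random vectors standing in for the MobileNetV2 features and predicted landmarks.

```python
import numpy as np

# Hypothetical late-fusion sketch, not D'OraCa's confirmed architecture:
# pooled backbone features are concatenated with predicted mouth-landmark
# coordinates, so the classifier can condition on where the lesion sits.

rng = np.random.default_rng(1)

image_features = rng.standard_normal(1280)   # e.g. MobileNetV2 pooled output
landmarks = rng.uniform(0, 1, size=(12, 2))  # 12 (x, y) landmarks, normalised

# Fused representation: image features + flattened landmark coordinates.
guided = np.concatenate([image_features, landmarks.ravel()])

# A linear layer over the fused vector scores the four referral classes.
W = rng.standard_normal((4, guided.size)) * 0.01
logits = W @ guided
predicted_class = int(np.argmax(logits))
```

The design choice here is that landmark coordinates are cheap to append and give the classifier explicit spatial context (e.g. lip vs. tongue region) without retraining the backbone; attention-style guidance would be a heavier alternative.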
Objective To describe the development of a platform for image collection and annotation that resulted in a multi‐sourced international image dataset of oral lesions to facilitate the development of automated lesion classification algorithms. Materials and Methods We developed a web‐interface, hosted on a web server, to collect oral lesion images from international partners. Further, we developed a customised annotation tool, also a web‐interface, for systematic annotation of images to build a rich clinically labelled dataset. We evaluated the sensitivities comparing referral decisions made through the annotation process with the clinical diagnosis of the lesions. Results The image repository hosts 2474 images of oral lesions consisting of oral cancer, oral potentially malignant disorders and other oral lesions that were collected through MeMoSA® UPLOAD. Eight hundred images were annotated by seven oral medicine specialists on MeMoSA® ANNOTATE, to mark the lesions and to collect clinical labels. The sensitivity of the referral decision for all lesions that required a referral for cancer management/surveillance was moderate to high depending on the type of lesion (64.3%–100%). Conclusion This is the first description of a database with clinically labelled oral lesions. This database could accelerate the improvement of AI algorithms that can promote the early detection of high‐risk oral lesions.