Multispectral autofluorescence lifetime imaging (maFLIM) can be used to clinically image multiple metabolic and biochemical autofluorescence biomarkers of oral epithelial dysplasia and cancer. This study tested the hypothesis that maFLIM-derived autofluorescence biomarkers can be used in machine-learning (ML) models to discriminate dysplastic and cancerous from healthy oral tissue. Clinical widefield maFLIM endoscopy imaging of cancerous and dysplastic oral lesions was performed at two clinical centers. Endoscopic maFLIM images from 34 patients acquired at one of the clinical centers were used to optimize ML models for automated discrimination of dysplastic and cancerous from healthy oral tissue. A computer-aided detection system was developed and applied to a set of endoscopic maFLIM images from 23 patients acquired at the other clinical center, and its performance was quantified in terms of the area under the receiver operating characteristic curve (ROC-AUC). Discrimination of dysplastic and cancerous from healthy oral tissue was achieved with an ROC-AUC of 0.81. This study demonstrates the capability of widefield maFLIM endoscopy to clinically image autofluorescence biomarkers that can be used in ML models to discriminate dysplastic and cancerous from healthy oral tissue. Widefield maFLIM endoscopy thus holds potential for automated in situ detection of oral dysplasia and cancer.
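The headline metric above, ROC-AUC, can be computed directly from per-sample classifier scores via its Mann-Whitney rank equivalence (the AUC equals the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one). A minimal, self-contained sketch follows; the scores and labels are made up for illustration and are not data from the study:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC-AUC via the Mann-Whitney U equivalence.

    scores: classifier scores (higher = more likely dysplastic/cancerous)
    labels: 1 for dysplastic/cancerous, 0 for healthy
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # 1-based ranks in ascending score order
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    # Tied scores get the mean of their ranks
    for s in np.unique(scores):
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Perfectly separated scores give AUC = 1.0
print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used; the rank formulation above just makes the statistic explicit.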
In some applications of biomedical imaging, a linear mixture model can represent the constitutive elements (end-members) and their per-pixel contributions (abundances) in the image. In this work, the extended blind end-member and abundance extraction (EBEAE) methodology is mathematically formulated to address the blind linear unmixing (BLU) problem subject to positivity constraints in optical measurements. The EBEAE algorithm is based on constrained quadratic optimization and an alternated least-squares strategy to jointly estimate end-members and their abundances. In our proposal, a local approach estimates the abundances of each end-member by maximizing their entropy, and a global technique iteratively identifies the end-members by reducing the similarity among them. All the cost functions are normalized, and four initialization approaches are suggested for the end-member matrix. Synthetic datasets are first used to validate EBEAE under different noise types and levels, and its performance is compared to state-of-the-art BLU algorithms. In a second stage, three experimental biomedical imaging applications are addressed with EBEAE: m-FLIM for chemometric analysis of oral cavity samples, OCT for macrophage identification in post-mortem artery samples, and hyperspectral images for in vivo brain tissue classification and tumor identification. In our evaluations, EBEAE was able to provide a quantitative analysis of the samples with no or minimal a priori information.

INDEX TERMS: Blind linear unmixing, constrained optimization, fluorescence lifetime imaging microscopy, hyperspectral imaging, optical coherence tomography.
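The linear mixture model and the alternated least-squares idea can be illustrated with a toy factorization. The sketch below is a generic NMF-style alternating scheme with positivity and sum-to-one abundance constraints on synthetic data; it is not the EBEAE algorithm itself (it omits the entropy maximization and end-member similarity penalty described in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def blind_unmix(Y, n_end, n_iter=200):
    """Toy blind linear unmixing by alternating projected least squares.

    Y: (n_bands, n_pixels) measurements, modeled as Y ~ P @ A with
    end-members P (columns) and abundances A, both nonnegative, and
    abundances summing to one per pixel.
    """
    n_bands, n_pixels = Y.shape
    P = rng.random((n_bands, n_end))
    for _ in range(n_iter):
        # Abundance step: least squares, then project onto positivity
        A = np.clip(np.linalg.lstsq(P, Y, rcond=None)[0], 1e-9, None)
        A /= A.sum(axis=0, keepdims=True)  # sum-to-one per pixel
        # End-member step: same on the transposed problem Y.T ~ A.T @ P.T
        P = np.clip(np.linalg.lstsq(A.T, Y.T, rcond=None)[0].T, 1e-9, None)
    return P, A

# Synthetic check: mix two known end-members, then refit blindly
P_true = np.array([[1.0, 0.1], [0.2, 1.0], [0.5, 0.5]])
A_true = rng.dirichlet([1.0, 1.0], size=50).T   # (2, 50), columns sum to 1
Y = P_true @ A_true
P_est, A_est = blind_unmix(Y, n_end=2)
rel_err = np.linalg.norm(Y - P_est @ A_est) / np.linalg.norm(Y)
print(rel_err)  # small reconstruction residual
```

Like all blind factorizations, the recovered end-members are identifiable only up to permutation and scaling, which is why the abstract's normalization and initialization strategies matter in practice.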
Early detection is critical for improving the survival rate and quality of life of oral cancer patients; unfortunately, dysplastic and early-stage cancerous oral lesions are often difficult to distinguish from benign oral lesions during standard clinical oral examination. Therefore, there is a critical need for novel clinical technologies that would enable reliable oral cancer screening. The autofluorescence properties of oral epithelial tissue provide quantitative information about the morphological, biochemical, and metabolic alterations of tissue and cells that accompany carcinogenesis. This study aimed to identify novel biochemical and metabolic autofluorescence biomarkers of oral dysplasia and cancer that could be clinically imaged using multispectral autofluorescence lifetime imaging (maFLIM) endoscopy. In vivo maFLIM clinical endoscopic images of benign, precancerous, and cancerous lesions from 67 patients were acquired using a novel maFLIM endoscope. Widefield maFLIM feature maps were generated, and statistical analyses were applied to identify maFLIM features providing contrast between dysplastic/cancerous vs. benign oral lesions. A total of 14 spectral and time-resolved maFLIM features were found to provide such contrast, representing novel biochemical and metabolic autofluorescence biomarkers of oral epithelial dysplasia and cancer. To the best of our knowledge, this is the first demonstration of clinical widefield maFLIM endoscopic imaging of biochemical and metabolic autofluorescence biomarkers of oral dysplasia and cancer, supporting the potential of maFLIM endoscopy for early detection of oral cancer.
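Generically, the feature-identification step described above is a per-feature group-contrast analysis: each candidate maFLIM feature is tested for separation between lesion classes. A toy sketch using Cohen's d as the effect size on entirely synthetic features (the study's actual statistical tests and data are not reproduced here):

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference between two groups (pooled SD)."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                     / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled

rng = np.random.default_rng(1)
# Hypothetical per-lesion features: rows = lesions, cols = candidate features
benign = rng.normal(0.0, 1.0, size=(20, 3))
dysplastic = rng.normal([1.5, 0.1, 0.0], 1.0, size=(20, 3))  # feature 0 shifted

d = [abs(cohens_d(dysplastic[:, j], benign[:, j])) for j in range(3)]
ranked = np.argsort(d)[::-1]
print(ranked[0])  # feature 0 should show the strongest group contrast
```

A real analysis would add a significance test per feature and a multiple-comparison correction before declaring a feature a biomarker.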
Deep learning approaches for medical image analysis are limited by small data set sizes due to multiple factors, such as patient privacy and the difficulty of obtaining expert labelling for each image. In medical imaging system development pipelines, phases for system development and classification algorithms often overlap with data collection, creating small disjoint data sets collected at numerous locations with differing protocols. In this setting, merging data from different data collection centers increases the amount of training data. However, a direct combination of data sets will likely fail due to domain shifts between imaging centers. In contrast to previous approaches that focus on a single data set, we add a domain adaptation module to a neural network model and train using multiple data sets. Our approach encourages domain invariance between two multispectral autofluorescence lifetime imaging (maFLIM) data sets of in vivo oral lesions collected with an imaging system currently in development. The two data sets differ in the sub-populations imaged and in the calibration procedures used during data collection. We mitigate these differences using a gradient reversal layer and domain classifier. Our final model trained with two data sets substantially increases performance, including a significant increase in specificity, and achieves a significant increase in average performance over the best baseline model trained with two domains (p = 0.0341). Our approach lays the foundation for faster development of computer-aided diagnostic systems and presents a feasible approach for creating a single classifier that robustly diagnoses images from multiple data centers in the presence of domain shifts.
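The mechanism named above, the gradient reversal layer, is the identity in the forward pass and negates (and scales) the gradient in the backward pass, so that the feature extractor is pushed toward features the domain classifier cannot distinguish. A from-scratch numpy sketch of just that layer (a real model would implement it as a custom autograd function in a framework such as PyTorch):

```python
import numpy as np

class GradientReversal:
    """Identity forward; gradient scaled by -lam in backward.

    Sits between the feature extractor and the domain classifier in
    domain-adversarial training: the domain classifier minimizes its
    loss, while the reversed gradient makes the feature extractor
    maximize it, encouraging domain-invariant features.
    """
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reverse and scale the gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
print(grl.forward(x))                          # unchanged features
print(grl.backward(np.array([0.2, 0.2, 0.2]))) # [-0.1 -0.1 -0.1]
```

The scale `lam` is commonly ramped up over training so the adversarial signal does not dominate early on; the schedule here is left out for brevity.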