Background: Approximately 90% of global cervical cancer (CC) cases occur in low- and middle-income countries. In most cases, CC can be detected early through routine screening programs, including cytology-based tests. However, such programs are logistically difficult to offer in low-resource settings owing to limited resources, limited infrastructure, and few trained experts. Visual inspection with acetic acid (VIA) has been widely promoted and is routinely recommended as a viable form of CC screening in resource-constrained countries. Acquiring digital images of the cervix during the VIA procedure improves quality assurance and visualization, leading to higher diagnostic accuracy and reduced variability in detection rates. However, a colposcope is bulky, expensive, electricity-dependent, and requires routine maintenance, and a specialist must be present to confirm the grade of abnormality from its images. Recently, smartphone-based imaging systems have made a significant impact on the practice of medicine by offering a cost-effective, rapid, and noninvasive method of evaluation. Furthermore, computer-aided analyses, including image processing-based methods and machine learning techniques, have also shown great potential for high impact on medical evaluation.
Objective: In this study, we demonstrate a new quantitative CC screening technique that implements a machine learning algorithm for smartphone-based endoscopic VIA. We also evaluate the diagnostic performance and practicability of the approach against the gold standard and against physicians' interpretations.
Methods: A smartphone-based endoscope system was developed and applied to VIA screening. A total of 20 patients were recruited to evaluate the system: five were healthy, and 15 had shown low- to high-grade cervical intraepithelial neoplasia (CIN) on both colposcopy and cytology tests. Endoscopic VIA images were obtained before a loop electrosurgical excision procedure for patients with abnormal tissue, and their histology specimens were collected. The endoscopic VIA images were assessed by four expert physicians relative to the gold standard of histopathology. In addition, VIA features were extracted through multiple image processing steps to distinguish abnormal (CIN2+) from normal (≤CIN1) tissue. Using the extracted features, the performance of different machine learning classifiers, such as k-nearest neighbors (KNN), support vector machine, and decision tree (DT), was compared to find the best algorithm for VIA. The best-performing classification model was then used to evaluate the screening performance of VIA.
Results: An average accuracy of 78%, with a Cohen kappa of 0.571, was observed for the four physicians' evaluations of the system. Through image processing, 240 sliced images were obtained from the cervicogram at each clock position, and five VIA features were extracted. Among the three models, KNN performed best under holdout 10-fold cross-validation, with an accuracy of 78.3%, an area under the curve of 0.807, a specificity of 80.3%, and a sensitivity of 75.0%. On a held-out data set not used in training, the trained model achieved an accuracy of 80.8%, a specificity of 84.1%, and a sensitivity of 71.9%. Predictions were visualized with intuitive color labels indicating normal/abnormal tissue using circular clock-type segmentation. When the overlap of abnormal tissue between the gold standard and the predicted values was calculated, the KNN model outperformed the physicians' average assessments in finding VIA.
Conclusions: We explored the potential of smartphone-based endoscopic VIA as an evaluation technique and used the cervicogram to classify normal/abnormal tissue with machine learning techniques.
The results of this study demonstrate its potential as a screening tool in low-resource settings.
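The abstract's best-performing classifier is k-nearest neighbors, which labels a new feature vector by majority vote among its closest training examples. A minimal pure-Python sketch of that voting step follows; the two-dimensional feature vectors and the normal/abnormal labels here are invented for illustration and are not the study's five VIA features:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance in feature space)."""
    dists = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy feature vectors standing in for extracted VIA features
# (values are illustrative, not from the study).
train_X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9), (0.15, 0.15)]
train_y = ["normal", "normal", "abnormal", "abnormal", "normal"]

print(knn_predict(train_X, train_y, (0.85, 0.85), k=3))  # abnormal
```

In a real pipeline the choice of k and the distance metric would be tuned within the cross-validation loop the abstract describes.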
Since glaucoma is a progressive and irreversible optic neuropathy, accurate screening and/or early diagnosis is critical to preventing permanent vision loss. Recently, optical coherence tomography (OCT) has become an accurate diagnostic tool for observing and measuring the thickness of the retinal nerve fiber layer (RNFL), which closely reflects the nerve damage caused by glaucoma. However, OCT is less accessible than fundus photography because of its higher cost and the expertise required for operation. Though widely used, fundus photography is effective for early glaucoma detection only in the hands of experts with extensive training. Here, we introduce a deep learning-based approach to predicting RNFL thickness around optic disc regions in fundus photographs for glaucoma screening. The proposed deep learning model is based on a convolutional neural network (CNN) and is trained and validated on fundus photographs paired with RNFL thicknesses measured by OCT. Using a dataset acquired from normal tension glaucoma (NTG) patients, the trained model can estimate RNFL thicknesses in 12 optic disc regions from fundus photos. Using intuitive thickness labels to identify localized damage of the optic nerve head and then estimating regional RNFL thicknesses from fundus images, we find that screening for glaucoma could achieve 92% sensitivity and 86.9% specificity. Receiver operating characteristic (ROC) analysis at 80% specificity shows that using the localized mean over the superior and inferior regions reaches 90.7% sensitivity, whereas using the global RNFL thickness reaches only 71.2% sensitivity. This demonstrates that the new approach of using regional RNFL thicknesses from fundus images holds promise as a potential screening technique for early-stage glaucoma.
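The ROC comparison above reports sensitivity at a fixed 80% specificity. A minimal sketch of that operating-point calculation follows: sweep candidate decision thresholds, take the first one that reaches the target specificity on the healthy group, and report the sensitivity achieved on the glaucoma group. The scores below are synthetic stand-ins, not values from the study:

```python
def sensitivity_at_specificity(pos_scores, neg_scores, target_spec=0.80):
    """Find the lowest threshold whose specificity on the negative
    (healthy) scores is at least `target_spec`, then return that
    threshold and the sensitivity on the positive (glaucoma) scores.
    Higher scores are assumed to indicate disease."""
    # Candidate thresholds: the observed scores themselves.
    for t in sorted(set(pos_scores) | set(neg_scores)):
        spec = sum(s < t for s in neg_scores) / len(neg_scores)
        if spec >= target_spec:
            sens = sum(s >= t for s in pos_scores) / len(pos_scores)
            return t, sens
    return None, 0.0

# Illustrative disease scores (e.g., derived from estimated regional
# RNFL thinning); synthetic values, not from the study.
glaucoma = [0.9, 0.8, 0.85, 0.7, 0.6]
healthy  = [0.2, 0.3, 0.4, 0.5, 0.65]

t, sens = sensitivity_at_specificity(glaucoma, healthy, 0.80)
print(t, sens)
```

The same routine applied to scores built from the localized superior/inferior mean versus the global mean would reproduce the kind of comparison the abstract makes.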
Abstract: A non-invasive method for monitoring heart activity can help reduce deaths caused by heart disorders such as stroke, arrhythmia, and heart attack. The human voice can be regarded as biometric data from which heart rate can be estimated. In this paper, we propose a method for dynamically estimating heart rate from human speech using voice signal analysis and an empirical linear predictor model. The correlation between the voice signal and heart rate is established by classifiers, and heart rates with or without emotion are predicted using linear models. Prediction accuracy was tested on data collected from 15 subjects, comprising about 4050 samples of speech signals and corresponding electrocardiogram samples. The proposed approach can be used for early non-invasive detection of heart rate changes that correlate with an individual's emotional state, and as a tool for diagnosing heart conditions in real-time situations.
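The predictor described above is an empirical linear model mapping voice features to heart rate. A minimal sketch of fitting a one-feature linear predictor by ordinary least squares follows; the feature (a pitch-like value) and the paired heart rates are invented for illustration and are not the paper's 15-subject data:

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical (voice feature, heart rate in bpm) pairs; synthetic
# values chosen to lie on an exact line for demonstration.
pitch = [110, 120, 130, 140, 150]
hr    = [65,  70,  75,  80,  85]

a, b = fit_linear(pitch, hr)
print(a * 125 + b)  # predicted heart rate at feature value 125
```

A real system would fit such a model per feature set (and possibly per emotional state, as the abstract suggests) on features extracted from the speech signal.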