The purpose of this study was to construct a deep learning system for rapidly and accurately screening retinal detachment (RD), vitreous detachment (VD), and vitreous hemorrhage (VH) in ophthalmic ultrasound in real time. Methods: We used a deep convolutional neural network to develop a deep learning system for screening multiple abnormal findings in ophthalmic ultrasonography, with 3580 images for classification and 941 images for segmentation. Sixty-two videos were used as the real-time test dataset, and an external dataset of 598 images was used for validation. Another 155 images were collected to compare the performance of the model with that of experts. In addition, a study was conducted to assess the effect of the model on improving the trainees' recognition of lesions. Results: The model achieved accuracies of 0.94, 0.90, 0.92, 0.94, and 0.91 in recognizing normal, VD, VH, RD, and other lesions, respectively. Compared with the ophthalmologists, the model achieved an accuracy of 0.73 in classifying RD, VD, and VH, outperforming most experts (P < 0.05). On the videos, the model had an accuracy of 0.81. With the model's assistance, the accuracy of the trainees improved from 0.84 to 0.94. Conclusions: The model could serve as a screening tool to rapidly identify patients with RD, VD, and VH, and it also has potential as a training aid. Translational Relevance: We developed a deep learning model to make ophthalmic ultrasound work more accurate and efficient.
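A minimal sketch of the kind of classification stage described above, assuming a standard transfer-learning setup (ResNet-18 backbone, five output classes: normal, VD, VH, RD, other). The abstract does not specify the architecture or hyperparameters, so all names and values here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical 5-way ophthalmic ultrasound classifier (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # normal, VD, VH, RD, other lesions

def build_classifier() -> nn.Module:
    """ImageNet-pretrained ResNet-18 with the final layer resized for 5-way screening."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)
    return backbone

model = build_classifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step: grayscale ultrasound frames replicated to
# 3 channels to match the pretrained backbone's expected input.
images = torch.randn(8, 1, 224, 224).repeat(1, 3, 1, 1)  # placeholder batch
labels = torch.randint(0, NUM_CLASSES, (8,))              # placeholder labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```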
Objective: To automatically and rapidly recognize the layers of corneal images obtained with in vivo confocal microscopy (IVCM) and classify them as normal or abnormal, a computer-aided diagnostic model based on deep learning was developed and tested to reduce physicians' workload. Methods: A total of 19,612 corneal images were retrospectively collected from 423 patients who underwent IVCM between January 2021 and August 2022 at Renmin Hospital of Wuhan University (Wuhan, China) and Zhongnan Hospital of Wuhan University (Wuhan, China). Images were reviewed and categorized by three corneal specialists before training and testing the models, which comprised a layer recognition model (epithelium, Bowman's membrane, stroma, and endothelium) and a diagnostic model, to identify the layers of corneal images and distinguish normal from abnormal images. In total, 580 database-independent IVCM images were used in a human-machine competition to assess the speed and accuracy of image recognition by four ophthalmologists and the artificial intelligence (AI) model. To evaluate the efficacy of the model, eight trainees were asked to recognize these 580 images both with and without model assistance, and the two sets of results were compared to explore the effect of model assistance. Results: The accuracy of the model reached 0.914, 0.957, 0.967, and 0.950 for recognition of the epithelium, Bowman's membrane, stroma, and endothelium in the internal test dataset, respectively, and 0.961, 0.932, 0.945, and 0.959 for recognition of normal/abnormal images at each of these layers, respectively. In the external test dataset, the accuracy of corneal layer recognition was 0.960, 0.965, 0.966, and 0.964, respectively, and the accuracy of normal/abnormal image recognition was 0.983, 0.972, 0.940, and 0.982, respectively. In the human-machine competition, the model achieved an accuracy of 0.929, which was similar to that of the specialists and higher than that of the senior physicians, and its recognition speed was 237 times faster than that of the specialists. With model assistance, the accuracy of the trainees increased from 0.712 to 0.886. Conclusion: A computer-aided diagnostic model based on deep learning was developed for IVCM images that rapidly recognizes the layers of corneal images and classifies them as normal or abnormal. This model can increase the efficacy of clinical diagnosis and assist physicians in training and learning for clinical purposes.
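A sketch of the two-stage inference pipeline this abstract describes: a layer classifier routes each IVCM frame to a per-layer normal/abnormal model. The checkpoint paths, loading convention, and function names are assumptions for illustration; the abstract does not specify how the two models are combined at inference time.

```python
# Hypothetical two-stage IVCM screening pipeline (illustrative only).
import torch

LAYERS = ["epithelium", "bowmans_membrane", "stroma", "endothelium"]

def load_model(path: str) -> torch.nn.Module:
    """Load a trained classifier saved as a whole module (placeholder paths)."""
    model = torch.load(path, map_location="cpu")
    model.eval()
    return model

layer_model = load_model("layer_recognition.pt")               # 4-way layer classifier
diag_models = {l: load_model(f"diag_{l}.pt") for l in LAYERS}  # per-layer normal/abnormal

@torch.no_grad()
def screen(frame: torch.Tensor) -> tuple[str, str]:
    """Return (predicted layer, 'normal'/'abnormal') for one preprocessed frame."""
    layer_idx = layer_model(frame.unsqueeze(0)).argmax(dim=1).item()
    layer = LAYERS[layer_idx]
    abnormal = diag_models[layer](frame.unsqueeze(0)).argmax(dim=1).item()
    return layer, "abnormal" if abnormal else "normal"
```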
Background: The effect of endoscopic therapy on the long-term survival outcomes of T1b oesophageal cancer (EC) is unclear. This study was designed to clarify the survival outcomes of endoscopic therapy and to construct a model for predicting prognosis in patients with T1b EC. Methods: This study used the Surveillance, Epidemiology, and End Results (SEER) database from 2004 to 2017 for patients with T1bN0M0 EC. Cancer-specific survival (CSS) and overall survival (OS) were compared among the endoscopic therapy, esophagectomy, and chemoradiotherapy groups. Stabilized inverse probability of treatment weighting was used as the main analysis method; propensity score matching and an independent dataset from our hospital were used for sensitivity analyses. Least absolute shrinkage and selection operator (LASSO) regression was employed to select variables. A prognostic model was then established and verified in two external validation cohorts. Results: The unadjusted 5-year CSS was 69.5% (95% CI, 61.5–77.5) for endoscopic therapy, 75.0% (95% CI, 71.5–78.5) for esophagectomy, and 42.4% (95% CI, 31.0–53.8) for chemoradiotherapy. After stabilized inverse probability of treatment weighting adjustment, CSS and OS were similar in the endoscopic therapy and esophagectomy groups (P=0.32 and P=0.83), whereas the CSS and OS of chemoradiotherapy patients were inferior to those of endoscopic therapy patients (both P<0.01). Age, histology, grade, tumour size, and treatment were selected to build the prediction model. The areas under the receiver operating characteristic curves at 1, 3, and 5 years were 0.631, 0.618, and 0.638 in validation cohort 1 and 0.733, 0.683, and 0.768 in validation cohort 2. The calibration plots also demonstrated the consistency between predicted and actual values in the two external validation cohorts. Conclusion: Endoscopic therapy achieved long-term survival outcomes comparable to those of esophagectomy in T1b EC patients. The prediction model performed well in estimating the OS of patients with T1b EC.
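A minimal sketch of stabilized inverse probability of treatment weighting for three treatment groups, the main adjustment method named above. It assumes a pandas DataFrame with a treatment column and numerically encoded baseline covariates; the column names are hypothetical and not taken from the SEER extract used in the study.

```python
# Illustrative stabilized IPTW weights for a three-arm comparison.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

COVARIATES = ["age", "grade", "histology", "tumour_size"]  # assumed, numerically encoded

def stabilized_iptw(df: pd.DataFrame, treatment_col: str = "treatment") -> pd.Series:
    """Stabilized weight = P(received treatment) / P(received treatment | covariates)."""
    ps_model = LogisticRegression(max_iter=1000)
    ps_model.fit(df[COVARIATES], df[treatment_col])
    probs = ps_model.predict_proba(df[COVARIATES])
    class_index = {c: i for i, c in enumerate(ps_model.classes_)}
    received = df[treatment_col].map(class_index).to_numpy()
    denom = probs[np.arange(len(df)), received]              # conditional propensity
    numer = df[treatment_col].map(
        df[treatment_col].value_counts(normalize=True)).to_numpy()  # marginal probability
    return pd.Series(numer / denom, index=df.index, name="stabilized_weight")
```

The resulting weights would then be passed to a weighted Kaplan-Meier or Cox analysis to compare CSS and OS across the endoscopic therapy, esophagectomy, and chemoradiotherapy groups.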