Background: COVID-19 often causes respiratory symptoms, making otolaryngology offices among the settings most susceptible to community transmission of the virus. Telemedicine may therefore benefit both patients and physicians.
Objective: This study aims to explore the feasibility of telemedicine for the diagnosis of all otologic disease types.
Methods: A total of 177 patients were prospectively enrolled, and each patient's clinical manifestations, together with otoendoscopic images, were recorded in the electronic medical record. Asynchronous diagnoses were made for each patient to assess Top-1 and Top-2 accuracy, and we selected 20 cases for a survey among four different otolaryngologists to assess accuracy, interrater agreement, and diagnostic speed. We also constructed an experimental automated diagnosis system and assessed its Top-1 accuracy and diagnostic speed.
Results: Asynchronous diagnosis showed Top-1 and Top-2 accuracies of 77.40% and 86.44%, respectively. In the selected 20 cases, the Top-2 accuracy of the four otolaryngologists averaged 91.25% (SD 7.50%), with almost perfect agreement between them (Cohen kappa=0.91). The automated diagnostic system showed 69.50% Top-1 accuracy. Otolaryngologists diagnosed an average of 1.55 (SD 0.48) patients per minute, while the machine learning model diagnosed an average of 667.90 (SD 8.3) patients per minute.
Conclusions: Asynchronous telemedicine in otology is feasible, given the reasonable Top-2 accuracy achieved by experienced otolaryngologists. Moreover, the greatly increased diagnostic speed at sustained accuracy suggests that medical resources could be optimized to provide expertise in areas short of physicians.
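The Top-1 and Top-2 accuracies above count a case as correct when the true diagnosis appears within the top k ranked diagnoses. A minimal sketch of the metric, assuming ranked diagnosis lists per case (the function name and toy labels are illustrative, not from the study):

```python
def top_k_accuracy(ranked_predictions, true_labels, k=2):
    """Fraction of cases whose true label appears among the top-k ranked diagnoses."""
    hits = sum(1 for preds, label in zip(ranked_predictions, true_labels)
               if label in preds[:k])
    return hits / len(true_labels)

# Hypothetical example: 4 cases, each with diagnoses ranked by confidence.
preds = [["OME", "AOM"], ["CSOM", "OME"], ["Normal", "Cerumen"], ["AOM", "OME"]]
labels = ["OME", "OME", "Cerumen", "CSOM"]
print(top_k_accuracy(preds, labels, k=1))  # 0.25
print(top_k_accuracy(preds, labels, k=2))  # 0.75
```

Top-2 accuracy is always at least Top-1 accuracy, which matches the study's 86.44% vs 77.40% figures.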
Background: Deep learning (DL)–based artificial intelligence may have diagnostic characteristics different from those of human experts. As a data-driven knowledge system, DL is thought to be more biased than clinicians by the heterogeneous disease incidence of real clinical populations. Conversely, because each human expert sees only a limited number of cases, experts may exhibit large interindividual variability. Understanding how the two groups classify the same data differently is therefore an essential step toward the cooperative use of DL in clinical applications.
Objective: This study aimed to evaluate and compare the differential effects of clinical experience on otoendoscopic image diagnosis in both computers and physicians, exemplified by the class imbalance problem, and to guide clinicians in using decision support systems.
Methods: We used digital otoendoscopic images of patients who visited the outpatient clinic of the Department of Otorhinolaryngology at Severance Hospital, Seoul, South Korea, from January 2013 to June 2019: 22,707 otoendoscopic images in total. After excluding similar images, 7500 otoendoscopic images were selected for labeling. We built a DL-based image classification model to classify a given image into one of 6 disease categories. Two test sets of 300 images were constructed: one balanced and one imbalanced. We included 14 clinicians (otolaryngologists and nonotolaryngology specialists, including general practitioners) and 13 DL-based models, and used accuracy (overall and per class) and kappa statistics to compare the results of individual physicians and the ML models.
Results: Our ML models had consistently high accuracies (balanced test set: mean 77.14%, SD 1.83%; imbalanced test set: mean 82.03%, SD 3.06%), equivalent to those of otolaryngologists (balanced: mean 71.17%, SD 3.37%; imbalanced: mean 72.84%, SD 6.41%) and far better than those of nonotolaryngologists (balanced: mean 45.63%, SD 7.89%; imbalanced: mean 44.08%, SD 15.83%).
However, the ML models were sensitive to class imbalance, scoring higher on the imbalanced test set (mean 82.03%, SD 3.06%) than on the balanced one (mean 77.14%, SD 1.83%). This was mitigated by data augmentation, particularly for low-incidence classes, but rare disease classes still had low per-class accuracies. Human physicians, although less affected by prevalence, showed high interphysician variability (ML models: kappa=0.83, SD 0.02; otolaryngologists: kappa=0.60, SD 0.07).
Conclusions: Even though ML models deliver excellent performance in classifying ear disease, physicians and ML models each have their own strengths. ML models achieve consistent, high accuracy but consider only the given image and are biased toward prevalent diseases, whereas human physicians vary in performance but are not biased toward prevalence and may also draw on information beyond the image. To deliver the best patient care amid a shortage of otolaryngologists, our ML model can play a cooperative role for clinicians of diverse expertise, provided it is kept in mind that the models consider only images and can remain biased toward prevalent diseases even after data augmentation.
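The agreement statistic used in both abstracts is Cohen's kappa: observed agreement between two raters, corrected for the agreement expected by chance from each rater's label frequencies. A minimal sketch of the calculation (the rater labels are hypothetical, not study data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over classes of the product of marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnoses from two raters on 4 cases.
rater_a = ["AOM", "OME", "OME", "Normal"]
rater_b = ["AOM", "OME", "CSOM", "Normal"]
print(round(cohens_kappa(rater_a, rater_b), 3))  # 0.667
```

On the conventional Landis–Koch scale, values above 0.8 (the ML models' 0.83) indicate almost perfect agreement, while 0.60 (the otolaryngologists) indicates only moderate agreement.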
The da Vinci system (da Vinci Surgical System; Intuitive Surgical Inc.) has developed rapidly over a few years from the S system to the Si system and now the Xi system. To investigate surgical feasibility and provide workflow guidance for the newly released system, we used the new da Vinci Xi system for transoral robotic surgery (TORS) on a cadaveric specimen. Bilateral supraglottic partial laryngectomy, hypopharyngectomy, lateral oropharyngectomy, and base-of-tongue resection were performed serially in search of the optimal procedures with the new system. The new surgical robotic system has been upgraded in all respects. The telescope and camera are incorporated into one system with a digital end-mounted camera; overhead boom rotation allows multiquadrant access without axis limitation; and the arms are now thinner and longer, with grabbing movements for easy adjustment. The patient clearance button dramatically reduces external collisions. The new surgical robotic system has been optimized for improved anatomic access, with better-equipped appurtenances. This cadaveric study of TORS offers guidance on the best protocol for surgical workflow with the new Xi system, leading to improvements in the functional results of TORS.
To investigate the effect of choline alfoscerate (CA) on hearing amplification in patients with age-related hearing loss, we performed a prospective case-control observational study from March 2016 to September 2020. We assessed patients with a bilateral word recognition score (WRS) <50% on monosyllabic words. The patients were 65–85 years old, with no history of dementia, Alzheimer's disease, parkinsonism, or depression. After enrollment, all patients started using hearing aids (HAs). The CA group received a daily dose of 800 mg CA for 11 months. We performed between-group comparisons of audiological data after treatment, including pure tone audiometry, WRS, HA fitting data obtained using real-ear measurement (REM), and Abbreviated Profile of Hearing Aid Benefit scores. After CA administration, the WRS improved significantly in the CA group (4.2 ± 8.3%) but deteriorated in the control group (−0.6 ± 8.1%; p = 0.035). However, there was no significant between-group difference in the change in pure tone thresholds or in the aided speech intelligibility index calculated from REM. These findings suggest that the difference in WRS reflected central speech understanding rather than peripheral audibility. Therefore, oral CA could effectively improve listening comprehension in older HA users.