Background
This study aimed to build and evaluate a deep-learning artificial intelligence (AI) model to automatically classify swallow types from raw esophageal high-resolution manometry (HRM) data.
Methods
HRM studies from patients with no history of esophageal surgery were collected, comprising 1,741 studies with 26,115 swallows labeled by swallow type (normal, hypercontractile, weak-fragmented, failed, and premature) by an expert interpreter per the Chicago Classification. The dataset was stratified and split into train/validation/test datasets for model development. A long short-term memory (LSTM) network, a type of deep-learning AI model, was trained and evaluated. Overall performance and detailed per-swallow-type performance were analyzed. The interpretations of the supine swallows within a single study were further used to generate an overall study-level classification of peristalsis.
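The abstract does not include implementation details; below is a minimal PyTorch sketch of a swallow-level LSTM classifier of the kind described. The sensor count, window length, hidden size, and all other names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# The five Chicago Classification swallow types listed in the Methods.
SWALLOW_TYPES = ["normal", "hypercontractile", "weak-fragmented",
                 "failed", "premature"]

class SwallowLSTM(nn.Module):
    """LSTM classifier over one raw HRM swallow window.

    Each swallow is assumed to arrive as a (time_steps, n_sensors)
    pressure matrix; 36 sensors and 50 Hz sampling are assumptions.
    """

    def __init__(self, n_sensors: int = 36, hidden_size: int = 128,
                 num_layers: int = 2, n_classes: int = len(SWALLOW_TYPES)):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_sensors) raw pressure values
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # logits over the five swallow types

# Toy forward pass: batch of 8 swallows, 10 s at an assumed 50 Hz.
model = SwallowLSTM()
logits = model(torch.randn(8, 500, 36))
predicted = [SWALLOW_TYPES[i] for i in logits.argmax(dim=1).tolist()]
```

Study-level classification of peristalsis could then aggregate the per-swallow predictions across a study's supine swallows, though the abstract does not specify how that aggregation was done.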
Key Results
The LSTM model for swallow type yielded accuracies of 0.86/0.81/0.83 on the train/validation/test datasets. The model's study-level classification of peristalsis yielded an accuracy of 0.88 on the test dataset. Among model misclassifications, 535/698 (77%) of swallows and 25/35 (71%) of studies were assigned to adjacent categories, for example, normal to weak at the swallow level or normal to ineffective at the study level, respectively.
Conclusions and Inferences
A deep-learning AI model can automatically and accurately identify the Chicago Classification swallow types and peristalsis classification from raw HRM data. While future work to refine this model and incorporate overall manometric diagnoses is needed, this study demonstrates the role that AI can serve in the interpretation and classification of esophageal HRM studies.
Background and study aims Storage of full-length endoscopic procedure videos is becoming increasingly popular. To facilitate large-scale machine learning (ML) focused on clinical outcomes, these videos must be merged with patient-level data in the electronic health record (EHR). Our aim was to present a method of accurately linking patient-level EHR data with cloud-stored colonoscopy videos.
Methods This study was conducted at a single academic medical center. Most procedure videos are automatically uploaded to the cloud server but are identified only by procedure time and procedure room. We developed and then tested an algorithm to match recorded videos with the corresponding exams in the EHR based on procedure time and room, and subsequently to extract frames of interest.
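The matching algorithm itself is described only at a high level; the following is a minimal sketch of one plausible implementation. The data layout, field names, and the 15-minute tolerance window are assumptions for illustration, not the study's actual logic.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Video:
    path: str
    room: str
    start: datetime        # procedure time stamped on the cloud recording

@dataclass
class EhrExam:
    exam_id: str
    room: str
    start: datetime        # procedure start time documented in the EHR

def match_videos_to_exams(videos, exams, tolerance=timedelta(minutes=15)):
    """Match each video to the EHR exam in the same procedure room whose
    documented start time falls within a tolerance of the recording time.

    The 15-minute tolerance is an illustrative assumption; clock skew
    between the video recorder and the EHR would drive the real choice.
    """
    matches = {}
    for video in videos:
        candidates = [e for e in exams
                      if e.room == video.room
                      and abs(e.start - video.start) <= tolerance]
        if len(candidates) == 1:
            matches[video.path] = candidates[0].exam_id
        # Zero or multiple candidates: leave unmatched for manual review.
    return matches

videos = [Video("2019-03-01_room4_0830.mp4", "room4",
                datetime(2019, 3, 1, 8, 31))]
exams = [EhrExam("EX1001", "room4", datetime(2019, 3, 1, 8, 30))]
print(match_videos_to_exams(videos, exams))
# {'2019-03-01_room4_0830.mp4': 'EX1001'}
```

Requiring a unique candidate before accepting a match is one way such an approach could favor precision over recall, which would be consistent with a subset of the 28,611 colonoscopies being matched.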
Results Among 28,611 total colonoscopies performed over the study period, 21,170 colonoscopy videos in 20,420 unique patients (54.2 % male, median age 58) were matched to EHR data. Of 100 randomly sampled videos, appropriate matching was manually confirmed in all. In total, these videos represented 489,721 minutes of colonoscopy performed by 50 endoscopists (median 214 colonoscopies per endoscopist). The most common procedure indications were polyp screening (47.3 %), surveillance (28.9 %) and inflammatory bowel disease (9.4 %). From these videos, we extracted procedure highlights (identified by image capture; mean 8.5 per colonoscopy) and surrounding frames.
Conclusions We report the successful merging of a large database of endoscopy videos, stored with limited identifiers, with rich patient-level data in a highly accurate manner. This technique facilitates the development of ML algorithms based upon relevant patient outcomes.
Background: Functional lumen imaging probe (FLIP) Panometry is performed at the time of sedated endoscopy and evaluates esophageal motility in response to distension. This study aimed to develop and test an automated artificial intelligence (AI) platform that could interpret FLIP Panometry studies.
Methods: The study cohort included 678 consecutive patients and 35 asymptomatic controls who completed FLIP Panometry during endoscopy and high-resolution manometry (HRM). "True" study labels for model training and testing were assigned by experienced esophagologists per a hierarchical classification scheme. The supervised, deep-learning AI model generated FLIP Panometry heatmaps from raw FLIP data and assigned esophageal motility labels using a two-stage prediction model based on convolutional neural networks. Model performance was tested on a 15% held-out test set (n = 103); the remainder of the studies were used for model training (n = 610).
Key Results: "True" FLIP labels across the entire cohort included 190 (27%) "normal," 265 (37%) "not normal/not achalasia," and 258 (36%) "achalasia." On the test set, both the normal/not normal and the achalasia/not achalasia models achieved an accuracy of 89% (with 89%/88% recall and 90%/89% precision, respectively). Of 28 patients with achalasia (per HRM) in the test set, none were predicted as "normal" and 93% were predicted as "achalasia" by the AI model.
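The two-stage prediction model is not specified beyond its CNN basis; the sketch below illustrates one plausible hierarchical combination of two binary CNN classifiers over FLIP heatmaps, first normal vs. not normal, then achalasia vs. not achalasia. The architecture, heatmap dimensions, and decision order are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

def make_binary_cnn() -> nn.Sequential:
    """Small binary CNN over a single-channel FLIP Panometry heatmap.

    The heatmap size (16 sensors x 480 time samples) is an assumption.
    """
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),          # logit for the binary decision
    )

# Both models would be trained on the labeled heatmaps before use.
normal_model = make_binary_cnn()      # stage 1: normal vs. not normal
achalasia_model = make_binary_cnn()   # stage 2: achalasia vs. not achalasia

def classify(heatmap: torch.Tensor) -> str:
    """Hierarchical two-stage label, mirroring the three study labels."""
    if torch.sigmoid(normal_model(heatmap)).item() > 0.5:
        return "normal"
    if torch.sigmoid(achalasia_model(heatmap)).item() > 0.5:
        return "achalasia"
    return "not normal/not achalasia"

print(classify(torch.randn(1, 1, 16, 480)))  # one heatmap, batch of 1
```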
Conclusions: An AI platform provided accurate interpretation of FLIP Panometry esophageal motility studies from a single center compared with the impression of experienced FLIP Panometry interpreters. This platform may provide useful clinical decision support for esophageal motility diagnosis from FLIP Panometry studies performed at the time of endoscopy.