With recent breakthroughs in artificial intelligence, computer‐aided diagnosis (CAD) for upper gastrointestinal endoscopy is gaining increasing attention. The main research focuses in this field are automated identification of dysplasia in Barrett's esophagus and detection of early gastric cancers. By helping endoscopists avoid missing or mischaracterizing neoplastic changes in both the esophagus and the stomach, these technologies could help overcome current limitations of gastroscopy. At present, optical diagnosis of early‐stage dysplasia related to Barrett's esophagus can be achieved precisely only by endoscopists proficient in advanced endoscopic imaging, and the false‐negative rate for detecting gastric cancer is approximately 10%. Ideally, these novel technologies should operate during real‐time gastroscopy to provide on‐site decision support for endoscopists regardless of their skill; however, previous studies on these topics remain ex vivo and experimental in design. The feasibility, effectiveness, and safety of CAD for upper gastrointestinal endoscopy in clinical practice therefore remain unknown, although a considerable number of pilot studies by both engineers and physicians have reported excellent results. This review summarizes current publications on CAD for upper gastrointestinal endoscopy from the perspective of endoscopists and aims to indicate what is required for future research and implementation in clinical practice.
Background and Aim
Application of artificial intelligence in medicine is now attracting substantial attention. In the field of gastrointestinal endoscopy, computer‐aided diagnosis (CAD) for colonoscopy is the most investigated area, although it is still in the preclinical phase. Because colonoscopy is carried out by humans, it is inherently an imperfect procedure. CAD assistance is expected to improve its quality through automated polyp detection and characterization (i.e. predicting a polyp's pathology). It could help prevent endoscopists from missing polyps and provide a precise optical diagnosis for those detected. Ultimately, these CAD functions could increase the adenoma detection rate and reduce the cost of unnecessary polypectomy for hyperplastic polyps.
Methods and Results
To date, research on automated polyp detection has been limited to experimental assessments of algorithms on ex vivo videos or static images, with reported sensitivities of >90% and acceptable specificity. In contrast, research on automated polyp characterization has advanced further than that on polyp detection. Prospective studies of in vivo use of artificial intelligence technologies have been reported by several groups, some of which showed a >90% negative predictive value for differentiating diminutive (≤5 mm) rectosigmoid adenomas, exceeding the threshold required for an optical-biopsy strategy.
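The performance figures cited above (sensitivity, specificity, negative predictive value) are simple functions of confusion-matrix counts. As a minimal illustration with hypothetical numbers (not drawn from any of the studies discussed):

```python
# Hypothetical confusion-matrix counts for optical diagnosis of
# diminutive polyps (illustrative only, not from any cited study).
tp, fp, tn, fn = 85, 10, 92, 8

sensitivity = tp / (tp + fn)  # adenomas correctly called adenomas
specificity = tn / (tn + fp)  # non-adenomas correctly called benign
npv = tn / (tn + fn)          # share of "predicted benign" that truly are benign

# A >90% NPV is the threshold referred to above for leaving
# diminutive rectosigmoid polyps in place without resection.
print(f"NPV = {npv:.1%}")  # prints "NPV = 92.0%"
```

Note that NPV, unlike sensitivity and specificity, depends on the prevalence of adenomas in the examined population, which is why prospective in vivo studies are needed to confirm it.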
Conclusion
We outline the potential of CAD for colonoscopy and describe the current requirements for regulatory approval of artificial intelligence‐assisted medical devices.
This paper presents a new spatial fully connected tubular network for 3D tubular-structure segmentation. Automatic and complete segmentation of intricate tubular structures remains an unsolved challenge in medical image analysis. Airways and vasculature place high demands on medical image analysis: they are elongated, fine structures with calibers ranging from several tens of voxels down to voxel-level resolution, branching in a deeply multiscale fashion, with complex topological and spatial relationships. Most machine/deep learning approaches are based on intensity features and ignore spatial consistency along the structure, which is a distinctive property of tubular anatomy. In this work, we introduce 3D slice-by-slice convolutional layers in a U-Net architecture to capture the spatial information of elongated structures. Furthermore, we present a novel loss function, coined radial distance loss, specifically designed for tubular structures. The commonly used cross-entropy loss and generalized Dice loss are sensitive to volumetric variation; however, in tiny tubular structure segmentation, topological errors are as important as volumetric errors. The proposed radial distance loss places higher weight on the centerline, with the weight decreasing along the radial direction. Radial distance loss can thereby direct the network's attention toward tiny structures rather than thicker tubular structures. We perform experiments on bronchus segmentation in 3D CT images. The experimental results show that, compared with the baseline U-Net, our proposed network achieved improvements of approximately 24% and 30% in Dice index and centerline over ratio, respectively.
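The radial distance loss described above can be viewed as a distance-weighted cross-entropy. The following is a minimal 2D sketch, assuming an exponential decay exp(−d/σ) of the weight with distance d from the centerline; the decay function, the σ parameter, and all names here are illustrative assumptions rather than the paper's exact formulation, and the centerline is taken as given rather than computed by skeletonization:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def radial_weight_map(mask, centerline, sigma=2.0):
    """Weight map: highest on the centerline, decaying radially outward.

    mask: binary foreground (tube) mask
    centerline: binary centerline mask (assumed precomputed)
    sigma: decay rate (hypothetical parameter, not from the paper)
    """
    # Distance of every pixel to the nearest centerline pixel
    d = distance_transform_edt(~centerline.astype(bool))
    return np.exp(-d / sigma) * mask  # weights only inside the tube

def radial_distance_loss(pred, target, weights, eps=1e-7):
    """Weighted binary cross-entropy emphasizing the centerline."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((weights * bce).sum() / (weights.sum() + eps))

# Toy example: a horizontal 2D "tube" five pixels thick
mask = np.zeros((11, 20))
mask[3:8, :] = 1
centerline = np.zeros((11, 20), dtype=bool)
centerline[5, :] = True

w = radial_weight_map(mask, centerline)
# Centerline pixels get weight 1.0; the tube's edge pixels
# (distance 2 from the centerline) get exp(-2/sigma) = exp(-1)
```

Because thin branches consist almost entirely of near-centerline voxels, such a weighting makes errors on tiny structures cost roughly as much as errors on thick ones, which is the stated motivation for the loss.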