Digestive diseases are a major burden for society and healthcare systems, and with an aging population, their effective management will become critical. Healthcare systems worldwide already struggle to ensure quality and affordability of healthcare delivery, and this will remain a significant challenge in the medium term. Wireless capsule endoscopy (WCE), introduced in 2000 by Given Imaging Ltd., is an example of disruptive technology and represents an attractive alternative to traditional diagnostic techniques. WCE overcomes key limitations of conventional endoscopy, enabling inspection of the digestive system without discomfort or the need for sedation. It thus encourages patients to undergo gastrointestinal (GI) tract examinations and facilitates mass screening programmes. With the integration of further capabilities based on microrobotics, e.g. active locomotion and embedded therapeutic modules, WCE could become the key technology for GI diagnosis and treatment. This review presents a research update on WCE and describes the state of the art of current endoscopic devices, with a focus on research-oriented robotic capsule endoscopes enabled by microsystem technologies. The article also presents a visionary perspective on the potential of WCE for screening, diagnostic and therapeutic endoscopic procedures.
Colorectal cancer (CRC) is one of the most common and deadliest forms of cancer, accounting for nearly 10% of all cancer cases worldwide. Even though colonoscopy is considered the most effective method for screening and diagnosis, the success of the procedure is highly dependent on operator skill and hand-eye coordination. In this work, we propose to adapt fully convolutional networks (FCNs) to identify and segment polyps in colonoscopy images. We converted three established networks into fully convolutional architectures and fine-tuned their learned representations to the polyp segmentation task. We validated our framework on the 2015 MICCAI polyp detection challenge dataset, surpassing the state of the art in automated polyp detection. Our method obtained high segmentation accuracy and a detection precision and recall of 73.61% and 86.31%, respectively.
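The core of the FCN conversion described above is replacing a classification network's dense layers with 1x1 convolutions, so the network emits a per-pixel score map instead of a single label. A minimal NumPy sketch of that idea (toy shapes and random weights, not the paper's trained models):

```python
import numpy as np

def dense_as_1x1_conv(feature_map, W, b):
    """Apply a fully-connected classifier as a 1x1 convolution.

    feature_map: (H, W, C) activations from a convolutional backbone.
    W: (C, num_classes) weights of the original dense layer.
    b: (num_classes,) bias.
    Returns an (H, W, num_classes) map of per-location class scores,
    i.e. a coarse segmentation map rather than one image-level label.
    """
    return feature_map @ W + b

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 4, 8))   # toy backbone output
W = rng.standard_normal((8, 2))          # 2 classes: polyp / background
b = np.zeros(2)

scores = dense_as_1x1_conv(feats, W, b)
print(scores.shape)                      # (4, 4, 2)
```

In a full FCN this coarse map is then upsampled back to input resolution; the sketch only shows why sharing the classifier weights across spatial positions yields dense predictions.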
Robotic endoscopic systems offer a minimally invasive approach to the examination of internal body structures, and their application is rapidly extending to cover the increasing need for accurate therapeutic interventions. In this context, it is essential for such systems to be able to perform measurements, such as measuring the distance travelled by a wireless capsule endoscope, so as to determine the location of a lesion in the gastrointestinal (GI) tract, or to measure the size of lesions for diagnostic purposes. In this paper, we investigate the feasibility of performing contactless measurements using a computer vision approach based on neural networks. The proposed system integrates a deep convolutional image registration approach and a multilayer feed-forward neural network in a novel architecture. The main advantage of this system over the state of the art is that it is more generic, in the sense that it is: i) unconstrained by specific models, ii) more robust to non-rigid deformations, and iii) adaptable to most endoscopic systems and environments, while enabling measurements of enhanced accuracy. The performance of this system is evaluated in ex vivo conditions using a phantom experimental model and a robotically assisted test bench. The obtained results suggest wider applicability and impact in endoscopy in the era of big data.
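The second stage of the hybrid architecture above is a multilayer feed-forward network that regresses a measurement from registration-derived features. A minimal NumPy forward pass illustrates that stage; the feature layout, layer sizes, and weights here are hypothetical stand-ins, not the paper's trained model:

```python
import numpy as np

def mlp_forward(x, params):
    """Forward pass of a small feed-forward regressor.

    x: (n_features,) vector, e.g. displacement statistics produced by an
       image-registration front end (hypothetical feature layout).
    params: list of (W, b) pairs, one per layer; ReLU hidden layers,
       linear output (e.g. an estimated distance in mm).
    """
    h = x
    for W, b in params[:-1]:
        h = np.maximum(0.0, h @ W + b)   # ReLU hidden layers
    W, b = params[-1]
    return h @ W + b                     # linear regression output

rng = np.random.default_rng(1)
params = [(rng.standard_normal((6, 16)) * 0.1, np.zeros(16)),
          (rng.standard_normal((16, 1)) * 0.1, np.zeros(1))]
features = rng.standard_normal(6)        # toy registration features
distance = mlp_forward(features, params)
print(distance.shape)                    # (1,)
```

In practice the weights would be learned by regressing against ground-truth distances from the robotic test bench, rather than drawn at random as here.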
Background and study aims Capsule endoscopy (CE) is invaluable for minimally invasive endoscopy of the gastrointestinal tract; however, several technological limitations remain, including the lack of reliable lesion localization. We present an approach to 3D reconstruction and localization using visual information from 2D CE images. Patients and methods Colored thumbtacks were secured in rows to the internal wall of a LifeLike bowel model. A PillCam SB3 was calibrated and navigated linearly through the lumen by a high-precision robotic arm. The motion estimation algorithm used data (light falling on the object, fraction of reflected light and surface geometry) from 2D CE images in the video sequence to achieve 3D reconstruction of the bowel model at various frames. The ORB-SLAM technique was used for 3D reconstruction and CE localization within the reconstructed model. This algorithm compared pairs of points between images for reconstruction and localization. Results As the capsule moved through the model bowel, 42 to 66 video frames were obtained per pass. Mean absolute error in the estimated distance travelled by the CE was 4.1 ± 3.9 cm. Our algorithm was able to reconstruct the cylindrical shape of the model bowel with details of the attached thumbtacks. ORB-SLAM successfully reconstructed the bowel wall from simultaneous frames of the CE video. The "track" in the reconstruction corresponded well with the linear forward-backward movement of the capsule through the model lumen. Conclusion The reconstruction methods detailed above achieved good-quality reconstruction of the bowel model and localization of the capsule trajectory using information from the CE video and images alone.
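The point-pair comparison that ORB-SLAM performs between frames rests on matching binary ORB descriptors by Hamming distance. A minimal NumPy sketch of that brute-force matching step, using synthetic 256-bit descriptors rather than real ORB output:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=30):
    """Brute-force match binary descriptors by Hamming distance,
    the comparison underlying ORB-based point matching between frames.

    desc_a, desc_b: (N, 32) uint8 arrays (256-bit descriptors).
    Returns (i, j) index pairs whose best match is within max_dist bits.
    """
    # Popcount of XOR gives the Hamming distance between bit strings.
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]
    dists = np.unpackbits(xor, axis=2).sum(axis=2)   # (N, N) distances
    matches = []
    for i, row in enumerate(dists):
        j = int(row.argmin())
        if row[j] <= max_dist:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(2)
a = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)  # "frame 1" descriptors
b = a.copy()
b[:, 0] ^= 1          # "frame 2": same points, one bit of noise each

# Each descriptor should match its slightly perturbed copy.
print(match_descriptors(a, b))
```

A real pipeline would add a ratio test and geometric verification before using such matches for reconstruction; this sketch only shows the descriptor comparison itself.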