Conventional algorithms for binocular endoscopic three-dimensional (3D) reconstruction suffer from shortcomings such as low accuracy, a small field of view, and loss of scale information. To address these problems in the specific setting of gastric organs, a feature-point-based method for 3D endoscopic image stitching is proposed. Left and right images are acquired by moving the endoscope and converted into point clouds by binocular matching. The point clouds are then preprocessed to compensate for errors caused by scene characteristics such as uneven illumination and weak texture. Camera pose changes are estimated by detecting and matching feature points across adjacent left images. Finally, based on the calculated transformation matrix, point cloud registration is carried out with the iterative closest point (ICP) algorithm, realizing dense 3D reconstruction of the whole gastric organ. The results show a root mean square error of 2.07 mm and a 2.20-fold expansion of the endoscopic field of view, increasing the observation range. Compared with conventional methods, the approach not only preserves organ scale information but also yields a much denser scene, making it convenient for doctors to measure target areas, such as lesions, in 3D. These improvements will help improve the accuracy and efficiency of diagnosis.
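At the core of each ICP iteration described above is a rigid-alignment step that estimates the rotation and translation between two point sets with known correspondences. The abstract does not specify an implementation, so the following is a minimal sketch of that inner step using the standard Kabsch/SVD solution in numpy; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate the rotation R and translation t that align the point
    set `src` (N x 3) to `dst` (N x 3), assuming row i of src
    corresponds to row i of dst (the inner step of an ICP iteration)."""
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (det = -1 solutions)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

A full ICP loop would alternate this solve with nearest-neighbor correspondence search until the alignment error converges.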
Binocular endoscopy is gradually becoming the future of minimally invasive surgery (MIS) thanks to the development of stereo vision. However, some problems still exist, such as low reconstruction accuracy, a small surgical field, and low computational efficiency. To solve these problems, we designed a framework for real-time dense reconstruction in binocular endoscopy scenes. First, we obtained the initial disparity map using an SGBM algorithm and proposed a disparity confidence map to build the training dataset for StereoNet. Then, based on the depth map predicted by StereoNet, the corresponding left image of each depth map was input into the ORB-SLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping) framework in RGB-D mode to realize real-time dense reconstruction of the binocular endoscopy scene. The proposed algorithm was verified on a stomach phantom and a real pig stomach. Compared with the ground truth, the proposed algorithm's RMSE is 1.620 mm, and the number of effective points in the point cloud is 834,650, a significant improvement in mapping ability over binocular SLAM that preserves real-time performance while performing dense reconstruction. The effectiveness of the proposed algorithm is verified.
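Feeding an RGB-D SLAM pipeline requires converting the predicted disparity map into metric depth, which for a rectified stereo pair follows the standard relation Z = f·B/d (focal length in pixels, baseline in mm, disparity in pixels). A minimal sketch of that conversion is below; the focal length and baseline values in the test are illustrative placeholders, not the endoscope's calibration.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_mm):
    """Convert a disparity map (pixels) into a metric depth map (mm)
    via Z = f * B / d. Non-positive disparities are invalid and are
    marked NaN so downstream mapping can skip them."""
    depth = np.full(disparity.shape, np.nan, dtype=float)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth
```

This is also where scale information enters the reconstruction: because the baseline is known from calibration, the resulting point cloud is in real-world units.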
Endoscopic images have complex backgrounds and spatially varying noise, which causes mainstream denoising methods to remove noise incompletely and lose image detail. Thus, an endoscopic image denoising algorithm based on a spatial-attention UNet is proposed in this paper. A UNet based on residual learning is used as the backbone network. Spatial attention modules based on noise-intensity estimation, together with edge feature extraction modules, remove noise more effectively while preserving image details and improving generalization ability. We captured endoscopic images of real scenes with a gastroscope and compared our method with mainstream methods. Experimental results show that our approach improves PSNR by 3.51 or 2.93 and SSIM by 0.03 or 0.015 compared with CBDNet or EDCNN, respectively. Our method effectively reduces the impact of noise on endoscopic image quality, thus better assisting doctors in diagnosis and treatment.
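The PSNR gains reported above are computed from the mean squared error between a reference image and its denoised counterpart. As a reminder of how that metric is defined, here is a minimal sketch (not the paper's evaluation code) for 8-bit images:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    a denoised image; higher values indicate less residual error."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM, the second metric reported, additionally compares local luminance, contrast, and structure rather than raw pixel error, which is why the two metrics are reported together.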
Background: The possibility of digitizing whole-slide images (WSI) of tissue has led to the advent of artificial intelligence (AI) in digital pathology. Advances in precision oncology have resulted in an increasing demand for predictive assays that enable mining of subvisual morphometric phenotypes and may ultimately improve patient care. Hence, a pathologist-annotated and artificial intelligence-empowered platform for the integration and analysis of WSI data and molecular detection data in tumors, called PAI-WSIT (http://www.paiwsit.com), was established. Methods: A standardized data collection process was used in PAI-WSIT, a multifunctional annotation tool was developed, and a user-friendly search engine and web interface were integrated for database access. Furthermore, deep learning frameworks were applied to two tasks: detecting malignant regions and classifying phenotypic subtypes in colorectal cancers (CRCs). Results: PAI-WSIT recorded 8633 WSIs of 1772 tumor cases, mainly CRC from four regional hospitals in China and The Cancer Genome Atlas (TCGA), as well as breast, lung, prostate, bladder, and kidney cancers from two Chinese hospitals. A total of 1298 WSIs with high-quality annotations were evaluated by a panel of 8 pathologists. Gene detection reports of 582 tumor cases were collected, and clinical information of all tumor cases was documented. We reached an overall accuracy of 0.933 in WSI classification for malignant region detection of CRC and an area under the curve (AUC) of 0.719 on the colorectal subtype dataset. Conclusions: Collectively, the annotation function, data integration, and AI-based analysis of PAI-WSIT support AI-assisted tumor diagnosis and together provide a comprehensive curation of carcinoma pathology.
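The subtype classifier above is summarized by its AUC, which can be read as the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A minimal rank-based sketch of that computation is shown below (an illustration of the metric, not the platform's evaluation pipeline):

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs in which the positive
    case receives the higher score; ties count half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.719, as reported for the colorectal subtype dataset, therefore means a positive case outranks a negative one roughly 72% of the time.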