Wireless capsule endoscopy (WCE) is an imaging technology that enables close examination of the interior of the entire small intestine. A major problem associated with this technology is that a large volume of video data needs to be examined manually by clinicians. It is therefore useful to design a mechanism that allows clinicians to evaluate a video without watching it in its entirety. In this paper, a shot detection-based method is presented for automatically building a static storyboard of a WCE video; a moving storyboard is then extracted from the selected representative frames under the supervision of clinicians. Experimental results show that most of the representative frames containing relevant features can be extracted from the original WCE video. The proposed method significantly and safely reduces the number of frames that clinicians need to examine and thus speeds up the diagnosis procedure.
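The abstract does not specify the shot-detection criterion, but a common approach is to compare color histograms of consecutive frames and flag a shot boundary when the distance exceeds a threshold. The sketch below illustrates that idea; the function name, threshold, and L1 distance are illustrative assumptions, not details from the paper.

```python
import numpy as np

def detect_shot_boundaries(frames, threshold=0.3, bins=16):
    """Flag frame indices whose intensity-histogram distance from
    the previous frame exceeds a threshold (a shot boundary)."""
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
        hist = hist / hist.sum()  # normalize to a distribution
        if prev_hist is not None:
            # L1 distance between consecutive histograms
            dist = np.abs(hist - prev_hist).sum()
            if dist > threshold:
                boundaries.append(i)
        prev_hist = hist
    return boundaries
```

A representative frame for the static storyboard could then be chosen from each detected shot, e.g. the frame closest to the shot's mean histogram.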
Wireless capsule endoscopy (WCE) is a state-of-the-art imaging technique used to view the entire gastrointestinal (GI) tract. With this technique, clinicians can detect diseases such as obscure gastrointestinal bleeding, polyps, Crohn's disease, and celiac disease. However, like other endoscopes, the WCE provides only 2-D images; the real anatomical structure of an observed lesion is unavailable and can only be inferred by the clinician. In this paper, we use the shape from shading (SFS) technique to generate 3-D structures from 2-D endoscopic images. To satisfy the three assumptions underlying the SFS technique, we propose a preprocessing method for endoscopic images. Experiments with real WCE data demonstrate good 3-D shape recovery performance. A smooth and visually plausible scene can be created from the preprocessed endoscopic image, preserving the structure of the observed objects without any hardware upgrades. The recovered 3-D shape enhances the video and therefore improves the viewing of the GI tract, leading to a more accurate diagnosis.
Index Terms: Wireless capsule endoscopy, shape recovery, shape from shading.
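SFS methods typically assume a Lambertian surface, so image brightness is modeled by the standard Lambertian reflectance map R(p, q), where (p, q) is the surface gradient and (ps, qs) encodes the light direction. The helper below evaluates that classical formula; it is a minimal sketch of the reflectance model underlying SFS, not code from the paper.

```python
import numpy as np

def lambertian_reflectance(p, q, ps, qs):
    """Lambertian reflectance map for a surface patch with gradient
    (p, q) lit from direction (ps, qs):
        R(p, q) = (1 + p*ps + q*qs) /
                  (sqrt(1 + p^2 + q^2) * sqrt(1 + ps^2 + qs^2))
    """
    num = 1.0 + p * ps + q * qs
    den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2)
    # clamp self-shadowed orientations (negative cosine) to zero
    return max(num / den, 0.0)
```

An SFS solver inverts this relation, searching for the depth map whose gradients reproduce the observed image intensities; the preprocessing step described in the abstract serves to make the endoscopic image better conform to this Lambertian model.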
Automatically detecting objects of interest in videos is a challenging issue since there is no prior knowledge about which objects should be detected or what these objects look like. The objects of interest can be defined as salient ones, and the saliency can be measured by surprise theory. This paper therefore proposes a new method for automatic object detection. It involves two modules: surprise estimation and object localization. The surprise estimation module first uses surprise theory to obtain a saliency map that indicates the novelty of each pixel compared with its previous states. The object localization module then determines where the salient objects are located using a branch-and-bound search algorithm. Experimental results show that the objects of interest in videos can be successfully localized by the proposed automatic detection method.
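Bayesian surprise theory quantifies novelty as the KL divergence between an observer's prior belief and the posterior after seeing new data. For a pixel modeled by a 1-D Gaussian, that divergence has a closed form, sketched below; this is the standard Gaussian KL formula and an illustrative assumption about the per-pixel model, not the paper's exact implementation.

```python
import math

def gaussian_surprise(mu0, var0, mu1, var1):
    """Bayesian surprise as KL(posterior || prior) between 1-D
    Gaussians: posterior N(mu1, var1), prior N(mu0, var0).
    Zero when the belief is unchanged; grows with novelty."""
    return 0.5 * (math.log(var0 / var1)
                  + (var1 + (mu1 - mu0) ** 2) / var0
                  - 1.0)
```

Evaluating this per pixel over time yields a saliency map; the branch-and-bound step then finds the rectangle maximizing the summed saliency inside it.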
Removing motion blur caused by camera shake is a difficult problem that has received much attention in past decades. Blur removal for images captured by the camera on a humanoid robot is even more difficult because of the heavy shaking and unpredictable movement at each pace. To address this challenging blur problem, we propose a hybrid image deblurring algorithm. Specifically, images blurred by robot movement are classified as less blurred or severely blurred using the Just Noticeable Blur Metric (JNBM) as a quantitative criterion. For less blurred images, we propose a maximum a posteriori (MAP) framework that takes advantage of the previous sharp image as a reference. For severely blurred images, since most details are lost and hard to recover by deconvolution, we refer to the neighboring, less blurred preceding images and directly warp the better-deblurred one via SIFT matching to obtain the deblurred result. Experimental results demonstrate that the proposed algorithm outperforms existing methods both qualitatively and quantitatively.
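The hybrid pipeline hinges on first splitting frames into two blur classes. The sketch below uses variance of the Laplacian as a simple sharpness proxy, a deliberate stand-in for the JNBM criterion named in the abstract (whose formula is not given here); the function name and threshold are illustrative assumptions.

```python
import numpy as np

def classify_blur(image, threshold=100.0):
    """Label an image 'less blurred' or 'severely blurred' by
    variance of a discrete Laplacian (sharpness proxy standing in
    for the paper's JNBM criterion)."""
    img = np.asarray(image, dtype=float)
    # 4-neighbour discrete Laplacian on the interior pixels
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return 'less blurred' if lap.var() > threshold else 'severely blurred'
```

In the full pipeline, 'less blurred' frames would be sent to the reference-guided MAP deconvolution branch, while 'severely blurred' frames would be replaced by warping a deblurred neighboring frame.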