2016 IEEE International Conference on Imaging Systems and Techniques (IST)
DOI: 10.1109/ist.2016.7738202
Robotic validation of visual odometry for wireless capsule endoscopy

Abstract: Wireless capsule endoscopy (WCE) is the prime diagnostic modality for the small bowel. It consists of a swallowable color camera that enables the visual detection and assessment of abnormalities without patient discomfort. The localization of the capsule is currently performed in the 3D abdominal space using radiofrequency (RF) triangulation. However, this approach does not provide sufficient information for the localization of the capsule, and therefore for the localization of the detected abnormalities, wit…

Cited by 9 publications (9 citation statements). References 19 publications.
“…Parameters c_x and c_y are the coordinates of the principal point of the camera (optical center), in the x and y dimensions expressed in pixel units, and factor a is the skew coefficient, which is non-zero if the image axes are not perpendicular. In [20] we showed that Kannala and Brandt's calibration method [16] can yield slightly better results than Zhang's method [14], but it requires prior knowledge of the camera's focal length. In order to minimize the dependence on camera parameters, Zhang's method was used [14], as implemented in Bouguet's calibration toolbox [15].…”
Section: Parametric VO
confidence: 96%
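The parameters described in the excerpt (focal lengths, principal point (c_x, c_y), and skew a) form the standard pinhole intrinsic matrix. The sketch below builds such a matrix and projects a camera-frame 3D point to pixel coordinates; all numeric values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Pinhole intrinsic matrix with skew, per the excerpt:
# fx, fy: focal lengths in pixels; (cx, cy): principal point; a: skew.
# All values below are assumed for illustration only.
fx, fy = 500.0, 500.0
cx, cy = 320.0, 240.0
a = 0.0  # skew is zero when the image axes are perpendicular

K = np.array([[fx, a,  cx],
              [0., fy, cy],
              [0., 0., 1.]])

def project(K, X):
    """Project a 3D point X = (x, y, z) in camera coordinates to pixels."""
    p = K @ np.asarray(X, dtype=float)
    return p[:2] / p[2]  # perspective division

u, v = project(K, (0.1, -0.2, 1.0))  # -> (370.0, 140.0)
```

In practice, Zhang's method as referenced above is available off the shelf, e.g. OpenCV's `cv2.calibrateCamera`, which estimates K (and distortion coefficients) from multiple views of a planar checkerboard.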
“…However, the displacement estimation was possible only at a relative scale, and not in physical units. Recently we proposed a VO methodology that was able to perform the displacement estimation in the 3D coordinate system, in physical units [20]. The mean absolute error (MAE) achieved in that study for the estimation of the distance covered by the CE was 7.2 ± 1.4 cm.…”
Section: Related Work
confidence: 96%
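As a sketch of how such an error figure is obtained, the snippet below scores hypothetical per-run distance estimates against ground truth using the mean absolute error and the spread of the absolute errors. The numbers are invented for illustration and do not reproduce the paper's 7.2 ± 1.4 cm result.

```python
import numpy as np

# Hypothetical ground-truth distances (e.g., from a robotic arm) and
# VO-estimated distances, in cm. Invented values for demonstration only.
ground_truth_cm = np.array([50.0, 80.0, 120.0, 60.0])
estimated_cm = np.array([44.0, 87.0, 112.0, 66.5])

errors = np.abs(estimated_cm - ground_truth_cm)  # absolute error per run
mae = errors.mean()        # mean absolute error
spread = errors.std(ddof=1)  # sample standard deviation of the errors
```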
“…Developments in image processing and deep learning have provided another framework for localization of the endoscopy capsule. It has been demonstrated that, based on geometrical models, purely vision-based localization can be performed in vitro [28]–[32]. In particular, Wahid et al. [19] and Bao et al. [20] provided a simple geometrical approximation to the colon.…”
Section: Introduction
confidence: 99%
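One way to read "a simple geometrical approximation to the colon" is a tube model: if the lumen is treated as a cylinder of known radius and the camera looks roughly along its axis, the depth of a wall feature follows from similar triangles. The sketch below illustrates that idea; the tube model and every number here are assumptions for illustration, not the specific formulation of the cited works.

```python
# Tube-model depth sketch: a wall feature at lateral distance R from the
# camera's optical axis projects to pixel offset r = f * R / z, so its
# depth is z = f * R / r. All values are illustrative assumptions.
f_pixels = 500.0  # assumed focal length in pixels
R_cm = 2.5        # assumed lumen (cylinder) radius in cm
r_pixels = 125.0  # observed pixel offset of a wall feature from the center

z_cm = f_pixels * R_cm / r_pixels  # approximate feature depth in cm
```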
“…The results lead to the conclusion that there is only a marginal difference in SOD performance, both quantitatively and qualitatively, between the use of predicted and sensor-based estimated depth. MonoSOD can be beneficial for robotic applications where the installation of sensor-based depth acquisition methods is difficult due to the design requirements of the robot, e.g., in robotic capsule endoscopes (Ciuti et al., 2016).…”
Section: Discussion
confidence: 99%