Operating in minimally invasive surgery is difficult because surgeons work without haptic feedback or depth perception. Moreover, the field of view perceived through the endoscope is usually quite limited. The goal of this paper is to let surgeons view wide-angle endoscopic images without the drawback of lens distortion. The proposed distortion correction process consists of lens calibration and real-time image warping. The calibration step estimates the parameters of the lens distortion model. We propose a fully automatic Hough-entropy-based calibration algorithm that provides results comparable to those of previous manual calibration methods. To achieve real-time correction, we use a graphics processing unit (GPU) to warp the image in parallel. In addition, surgeons may adjust the focal length of the lens during an operation. Real-time distortion correction for a zoomable lens is infeasible with traditional calibration methods because the tedious calibration process must be repeated whenever the focal length changes. We derive a formula describing the relationship among the distortion parameter, the focal length, and the image boundary. Hence, we can estimate the focal length of a zoomable lens online from endoscopic images and achieve real-time lens distortion correction.
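The image-warping step above can be illustrated with a single-parameter division model for radial distortion. This is a minimal sketch, assuming the model form, the parameter name `k`, and a known distortion center for illustration; the paper's actual distortion model and parameters may differ:

```python
import numpy as np

def undistort_points(pts, k, center):
    """Map distorted pixel coordinates to undistorted ones with the
    single-parameter division model: x_u = x_d / (1 + k * r_d^2),
    where coordinates are taken relative to the distortion center."""
    d = pts - center                              # offsets from center
    r2 = np.sum(d * d, axis=1, keepdims=True)     # squared distorted radii
    return center + d / (1.0 + k * r2)

# With k = 0 the mapping is the identity; with k > 0 (barrel distortion),
# points are pulled toward the distortion center.
pts = np.array([[120.0, 80.0]])
center = np.array([100.0, 100.0])
```

In a real-time pipeline this per-pixel mapping is evaluated for every output pixel, which is why the paper offloads the warp to the GPU: each pixel's correction is independent and trivially parallel.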
Wide-angle cameras are widely used in surveillance and endoscopic imaging, so an automatic distortion correction method is very useful for these applications. Traditional methods extract corners or distorted straight lines to estimate distortion parameters. The Hough transform is a powerful tool for assessing straightness; however, previous methods usually require some human intervention or focus on only a single curve. In this paper, we propose a new Hough-transform-based method that incorporates all curves into the estimation of the distortion parameters. By modeling the relationship between the distortion parameters and the curves, our method is fully automatic and does not require manual selection of curves in an image. Experiments on synthetic and real datasets have been conducted, and the results of our method are compared quantitatively with those of other Hough-transform-based methods. The experimental results show that the accuracy of the proposed automatic method is comparable to that of other manual line-based methods.
In this paper, we propose a new technique for compensating the radial and perspective distortions of photos acquired with a wide-angle lens, using facial features detected in the images rather than predefined calibration patterns. The proposed algorithm utilizes a statistical facial feature model to recover radial distortion, and the facial features are further used for an adaptive cylindrical projection that reduces perspective distortion near the image boundary. Our algorithm has several advantages over traditional methods. First, traditional calibration patterns, such as man-made straight buildings, chessboards, or calibration cubes, are not required. Moreover, although radial distortion can be corrected by several conventional methods, most of them produce photos with larger perspective distortion of faces than our method does. The system is composed of four components: offline training of the statistical facial feature model, feature point extraction from distorted faces, estimation of radial distortion parameters with compensation of radial distortion, and adaptive cylindrical projection. To estimate the distortion parameters, we propose an energy function measuring the fitness between the undistorted coordinates of the facial feature points extracted from the input distorted image and the learned statistical facial feature model. Given the distortion parameters, the fitness is computed by solving a linear least-squares system, and the parameters that minimize the cost function are found through a hierarchical search. Experimental results demonstrate the distortion reduction achieved in the corrected images by the proposed method.
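The fitness computation described above can be sketched as a linear least-squares alignment of the undistorted feature points to a mean face shape under a 2-D similarity transform, combined with a search over the distortion parameter. This is a sketch under stated assumptions: a single-parameter division model, a plain grid search in place of the paper's hierarchical search, and hypothetical function names:

```python
import numpy as np

def similarity_fit_residual(shape, mean_shape):
    """Least-squares residual of aligning `shape` to `mean_shape` with a
    2-D similarity transform (scale+rotation a, b and translation tx, ty).
    Each point contributes two linear equations in (a, b, tx, ty)."""
    n = shape.shape[0]
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = shape[:, 0]; A[0::2, 1] = -shape[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = shape[:, 1]; A[1::2, 1] = shape[:, 0];  A[1::2, 3] = 1.0
    rhs = mean_shape.ravel()
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.sum((A @ sol - rhs) ** 2)

def estimate_k(dist_pts, mean_shape, center, ks):
    """Pick the distortion parameter whose undistorted feature points
    best fit the statistical mean shape (grid search for illustration)."""
    def undistort(p, k):
        d = p - center
        return center + d / (1.0 + k * np.sum(d * d, axis=1, keepdims=True))
    return min(ks, key=lambda k: similarity_fit_residual(
        undistort(dist_pts, k), mean_shape))
```

Because the similarity parameters enter linearly, each candidate distortion parameter can be scored with one closed-form least-squares solve, which is what makes a coarse-to-fine search over the distortion parameter cheap.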