This paper presents an error analysis of a novel navigation algorithm that uses as input the sequence of images acquired by a moving camera together with a Digital Terrain (or Elevation) Map (DTM/DEM). Specifically, it has been shown that the optical flow derived from two consecutive camera frames, combined with a DTM, can be used to estimate the position, orientation, and ego-motion parameters of the moving camera. Unlike previous works, the proposed approach does not require an intermediate explicit reconstruction of the 3D world. The present work studies the sensitivity of this algorithm. The main error sources are identified as the optical-flow computation, the quality of the terrain information, the structure of the observed terrain, and the camera trajectory. Assuming an appropriate characterization of these error sources, a closed-form expression for the uncertainty of the camera's pose and motion is first derived, and the influence of these factors is then confirmed through extensive numerical simulations. The main conclusion is that the proposed navigation algorithm produces accurate estimates under reasonable scenarios and error sources, and can therefore be used effectively as part of a navigation system for autonomous vehicles.
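The abstract mentions a closed-form expression for the uncertainty of the estimated pose and motion given characterized error sources. The paper's actual derivation is not reproduced here, but the standard first-order approach can be sketched as follows. Assume (hypothetically) that the estimator is a least-squares fit of a 6-DOF pose vector to n optical-flow measurements, that `J` is the n×6 Jacobian of the flow model at the solution, and that the flow errors are i.i.d. with standard deviation `sigma_flow`; then, to first order, Cov(pose) ≈ sigma_flow² (JᵀJ)⁻¹. All names and shapes below are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def pose_covariance(J, sigma_flow):
    """First-order covariance of a least-squares pose estimate.

    J          : (n, 6) Jacobian of the flow model w.r.t. the pose
    sigma_flow : standard deviation of the optical-flow error (pixels)
    """
    info = J.T @ J                       # Fisher information, up to sigma^2
    return sigma_flow**2 * np.linalg.inv(info)

# Stand-in Jacobian: 200 flow measurements, 6 pose/motion parameters.
rng = np.random.default_rng(0)
J = rng.standard_normal((200, 6))

cov = pose_covariance(J, sigma_flow=0.5)
print(np.sqrt(np.diag(cov)))             # 1-sigma bound per pose parameter
```

The diagonal of the resulting 6×6 covariance gives per-parameter uncertainty bounds, which is the kind of quantity the simulations described in the abstract would compare against empirical error statistics.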
For land-use monitoring, the main problem is robust positioning in urban canyons and areas of strong terrain relief when using GPS alone: satellite signal reflection and shielding in such environments leads to incorrect position fixes. GNSS-RTK does not solve the problem completely, because in some complex situations the entire satellite system fails to produce a correct solution. We turn this weakness (urban canyons and strong terrain relief) into an advantage: vision-based navigation that uses a map of the terrain relief. We investigate and demonstrate the effectiveness of this technology in the Xiaoshan region of China. The accuracy of the vision-based navigation system matches what is expected under these conditions. It was concluded that the maximum position error of the vision-based navigation is 20 m and the maximum Euler-angle error is 0.83 degrees; in the case of camera movement, the maximum position error is 30 m and the maximum Euler-angle error is 2.2 degrees.