A vision-aided terrain referenced navigation (VATRN) approach is addressed for autonomous navigation of unmanned aerial vehicles (UAVs) under GPS-denied conditions. A typical terrain referenced navigation (TRN) algorithm blends inertial navigation data with measured terrain information to estimate the vehicle's position. In this paper, a low-cost inertial navigation system (INS) for UAVs is supplemented with a monocular vision-aided navigation system and terrain height measurements. A point mass filter based on Bayesian estimation is employed as the TRN algorithm. Homographies are established to estimate the vehicle's relative translational motion from ground features under simple assumptions, and an error analysis of the homography estimation is performed to derive the error covariance matrix associated with the visual odometry data. The estimated error covariance is passed to the TRN algorithm for robust estimation. Furthermore, multiple ground features tracked across image observations are used as multiple height measurements to improve the performance of the VATRN algorithm.
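The combination described above — a Bayesian point mass filter whose prediction uses a visual-odometry translation estimate (with its estimated error covariance) and whose update compares a measured terrain height against a terrain map — can be sketched as below. This is a minimal 1-D illustration, not the paper's implementation: the terrain profile, grid, and all numerical values are hypothetical.

```python
import numpy as np

# Hypothetical 1-D terrain profile standing in for the reference map.
def terrain_height(x):
    return 50.0 + 10.0 * np.sin(0.01 * x)

def pmf_update(grid, weights, odometry_dx, odometry_var, measured_height, meas_var):
    """One cycle of a point mass filter for terrain referenced navigation.

    grid            : candidate along-track positions (1-D array, uniform spacing)
    weights         : prior probability mass on each grid point
    odometry_dx     : translation estimate from visual odometry
    odometry_var    : its variance (e.g. from homography error analysis)
    measured_height : terrain height measured below the vehicle
    meas_var        : variance of the height measurement
    """
    # Prediction: shift the mass by the odometry estimate and diffuse it
    # (discrete convolution with a Gaussian kernel) to account for odometry error.
    shifted = grid + odometry_dx
    spacing = grid[1] - grid[0]
    halfwidth = int(3 * np.sqrt(odometry_var) / spacing) + 1
    offsets = np.arange(-halfwidth, halfwidth + 1) * spacing
    kernel = np.exp(-0.5 * offsets**2 / odometry_var)
    kernel /= kernel.sum()
    predicted = np.convolve(weights, kernel, mode="same")

    # Measurement update: Bayes' rule with a Gaussian height likelihood.
    innovation = measured_height - terrain_height(shifted)
    likelihood = np.exp(-0.5 * innovation**2 / meas_var)
    posterior = predicted * likelihood
    posterior /= posterior.sum()

    # Point estimate: posterior mean over the shifted grid.
    return shifted, posterior, float(np.sum(shifted * posterior))
```

Because the point mass filter carries the full (possibly multimodal) posterior on a grid, it tolerates the terrain-matching ambiguities that would break a Kalman-style linearization.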
Robust marker tracking and relative navigation algorithms are presented for precise vision-based autonomous landing of a UAV. To recognize the marker at close range, concentric circles are adopted as the marker, together with an ellipse fitting algorithm based on direct least squares. A GPS-denied situation is considered: we assume that an IMU provides the vehicle's attitude and altitude. In addition, multiple ellipses are used to estimate the marker's center pixel coordinates, allowing the UAV to land more accurately. To verify the proposed vision-based relative navigation algorithm, numerical simulations are carried out using the Virtual Reality Toolbox in MATLAB™. The true position and attitude of the UAV are predetermined, and the relative position computed by the vision software, including the filter, is compared against them. The simulation results show that the algorithm is robust and very accurate.
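The direct least-squares ellipse fit mentioned above is a well-known technique (Fitzgibbon et al., here in the numerically stabler Halir–Flusser formulation). The sketch below fits a conic constrained to be an ellipse and recovers its center, which is the quantity the landing algorithm needs; it is a generic illustration of the method, not the paper's code.

```python
import numpy as np

def fit_ellipse_dls(x, y):
    """Direct least-squares ellipse fit (Halir-Flusser variant of Fitzgibbon).

    Returns the conic coefficients (a, b, c, d, e, f) of
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0, and the ellipse center (x0, y0).
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    D1 = np.column_stack([x * x, x * y, y * y])    # quadratic design block
    D2 = np.column_stack([x, y, np.ones_like(x)])  # linear design block
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2
    T = -np.linalg.solve(S3, S2.T)                 # eliminates the linear part
    M = S1 + S2 @ T
    # Apply the inverse of the ellipse constraint matrix C1 = [[0,0,2],[0,-1,0],[2,0,0]].
    M = np.array([M[2] / 2.0, -M[1], M[0] / 2.0])
    _, eigvec = np.linalg.eig(M)
    eigvec = np.real(eigvec)
    # The ellipse solution is the eigenvector with 4ac - b^2 > 0.
    cond = 4.0 * eigvec[0] * eigvec[2] - eigvec[1] ** 2
    a1 = eigvec[:, cond > 0][:, 0]
    coeffs = np.concatenate([a1, T @ a1])
    a, b, c, d, e, f = coeffs
    # Center from the zero-gradient condition of the conic.
    det = 4.0 * a * c - b * b
    x0 = (b * e - 2.0 * c * d) / det
    y0 = (b * d - 2.0 * a * e) / det
    return coeffs, (x0, y0)
```

Enforcing the 4ac − b² > 0 constraint inside the eigenproblem is what makes the fit ellipse-specific and robust to the partial, noisy contours typical of close-range marker images.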
A vision-only navigation system is addressed for autonomous navigation of unmanned aircraft with a monocular camera as the only sensor. Typical vision-based navigation algorithms blend inertial navigation data with vision measurements and other available information to estimate the vehicle's position. In this paper, however, we propose an approach that replaces the inertial navigation system with a translational motion estimate obtained from the camera. This method shares a concept with terrain referenced navigation in that height measurements from the camera are compared to terrain data to construct an observation model. A particle filter based on Bayesian tracking is employed to combine the translation estimate with the measured heights of feature points, which contain unknown correlation. Furthermore, an uncertainty analysis of the proposed navigation scheme is presented. Numerical simulations are conducted to verify the feasibility of the proposed method.

Nomenclature
R_01 = rotation of the camera between time instances t_0 and t_1
T_01 = translation of the camera between time instances t_0 and t_1
p_i(t_0) = set of feature points in an image at time instance t_0
P = reconstructed 3D point on the ground
L_x = error covariance matrix of the pixel coordinates
X_H = Jacobian matrix of the homography matrix with respect to the pixel coordinates
L_H = error covariance matrix of the homography matrix
L_T = error covariance matrix of the camera translation
X_R = Jacobian matrix of the rotation matrix with respect to the Euler angles
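A particle filter of the kind described above — predict with the camera-derived translation T_01 and its covariance L_T, then weight particles by how well terrain-map heights at the tracked feature locations match the measured heights — can be sketched as follows. The 2-D terrain function, feature offsets, and all numbers are hypothetical illustrations, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical terrain map: height as a function of horizontal position (x, y).
def terrain_height(x, y):
    return 100.0 + 5.0 * np.sin(0.05 * x) + 4.0 * np.cos(0.07 * y)

def pf_step(particles, weights, translation, trans_cov,
            feature_offsets, measured_heights, meas_var):
    """One predict/update/resample cycle of the vision-only particle filter.

    particles        : (N, 2) candidate horizontal positions
    translation      : camera-derived translation estimate (2-vector)
    trans_cov        : its 2x2 error covariance (L_T in the nomenclature)
    feature_offsets  : (M, 2) horizontal offsets of tracked ground features
                       relative to the vehicle (assumed known from imagery)
    measured_heights : (M,) measured terrain heights at those features
    """
    n = len(particles)
    # Prediction: move each particle by the visual translation plus sampled noise.
    particles = particles + rng.multivariate_normal(translation, trans_cov, n)
    # Update: Gaussian likelihood of all feature-height measurements jointly.
    px = particles[:, None, 0] + feature_offsets[None, :, 0]
    py = particles[:, None, 1] + feature_offsets[None, :, 1]
    err = terrain_height(px, py) - measured_heights[None, :]
    loglik = -0.5 * np.sum(err**2, axis=1) / meas_var
    weights = weights * np.exp(loglik - loglik.max())
    weights /= weights.sum()
    # Systematic resampling to avoid weight degeneracy.
    positions = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```

Using several feature heights per update is what sharpens the posterior here; a single height measurement would leave the horizontal position ambiguous along the terrain's level curves.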