Underwater navigation presents significant challenges because of the rapid attenuation of electromagnetic waves. Conventional underwater navigation methods rely on acoustic equipment, such as ultra-short-baseline localisation systems and Doppler velocity logs. However, these suffer from low refresh rates, low bandwidth, environmental disturbance, and high cost. In this paper, a novel underwater visual navigation method based on multiple ArUco markers is investigated. Unlike other underwater navigation approaches based on artificial markers, a noise model for the pose estimation of a single marker and an optimisation algorithm over multiple markers are developed to increase the precision of the method. Experimental tests were conducted in a towing tank. The results show that the proposed method is able to localise the underwater vehicle accurately.
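The abstract does not give the multi-marker optimisation itself; one common way to combine per-marker pose estimates, once each marker's noise has been modelled, is inverse-variance weighting. The sketch below assumes each marker yields a camera-position estimate with a scalar variance; the function name and weighting scheme are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def fuse_marker_positions(positions, variances):
    """Fuse camera-position estimates from several ArUco markers by
    inverse-variance weighting: noisier markers get smaller weights."""
    positions = np.asarray(positions, dtype=float)      # shape (n, 3)
    weights = 1.0 / np.asarray(variances, dtype=float)  # shape (n,)
    weights /= weights.sum()
    return weights @ positions  # weighted mean position, shape (3,)

# Three markers observe the vehicle; the distant, noisier third marker
# contributes least to the fused estimate.
est = fuse_marker_positions(
    [[1.00, 2.00, 0.50], [1.02, 1.98, 0.52], [1.30, 2.20, 0.40]],
    [0.01, 0.01, 0.25],
)
```

In practice the per-marker variances would come from the noise model mentioned in the abstract (e.g. as a function of marker distance and viewing angle).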
Underwater navigation remains a challenging problem because of electromagnetic attenuation. Traditional methods involve beacons, inertial sensors, and Doppler Velocity Logs (DVLs), but they have shortcomings such as high cost and lengthy setup time. To solve the underwater navigation problem at low cost, an integrated visual odometry system has been developed and is discussed in this paper. In this method, two inertial sensors provide the acceleration and attitude of the vehicle, and an underwater sonar provides the distance between the vehicle and the seabed, whilst in the visual odometry section an optical flow algorithm is applied to track feature points. With the depth provided by the sonar, the 3D positions of the feature points can be calculated, and the linear motion of the vehicle is then predicted from these feature points across consecutive frames. Finally, nonlinear optimisation is used to correct the attitude of the vehicle using visual information. With the proposed algorithm, the vehicle trajectory can be estimated at absolute scale using a single camera; computational complexity is reduced dramatically compared to other visual odometry methodologies; and the approach can work in sparse-texture conditions. The results from practical experiments demonstrate that the method is effective and a low-cost solution.
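The key idea of sonar-aided monocular odometry, resolving metric scale from a measured depth, can be sketched with a pinhole camera model. The following is a minimal illustration, assuming a flat seabed patch at the sonar-measured depth and pure translational motion between frames; all parameter names are illustrative, not the paper's implementation.

```python
import numpy as np

def backproject(pts_px, depth, fx, fy, cx, cy):
    """Back-project pixel features to 3D with a pinhole model, using a
    single sonar depth (flat-seabed assumption)."""
    u, v = pts_px[:, 0], pts_px[:, 1]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = np.full_like(x, depth)
    return np.stack([x, y, z], axis=1)

def estimate_translation(pts_prev, pts_curr, depth, fx, fy, cx, cy):
    """Estimate camera translation between two frames as the mean
    displacement of tracked features (pure-translation assumption)."""
    p0 = backproject(pts_prev, depth, fx, fy, cx, cy)
    p1 = backproject(pts_curr, depth, fx, fy, cx, cy)
    # Scene points appear to move opposite to the camera motion.
    return -np.mean(p1 - p0, axis=0)

# Synthetic check: features shift 10 px left at depth 2 m with fx = 500,
# which corresponds to the camera moving +0.04 m along x.
prev = np.array([[320.0, 240.0], [340.0, 260.0]])
curr = prev - np.array([10.0, 0.0])
t = estimate_translation(prev, curr, depth=2.0,
                         fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

This is why the sonar depth gives the trajectory absolute scale: without it, the pixel displacement only determines translation up to an unknown factor.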
Underwater positioning presents a challenging problem because of the rapid attenuation of electromagnetic waves and the disturbances and uncertainties of the environment. Conventional methods usually employ acoustic devices to localise Unmanned Underwater Vehicles (UUVs), which suffer from a slow refresh rate and low resolution and are susceptible to environmental noise. In addition, complex terrain can also degrade the accuracy of acoustic navigation systems. The application of underwater positioning methods based on visual sensors is hindered by the difficulty of acquiring depth maps, due to sparse features, changing illumination conditions, and scattering. In this paper, a novel vision-based underwater positioning system is proposed, based on a Light Detection and Ranging (LiDAR) camera and an inertial measurement unit. The LiDAR camera, benefiting from laser scanning techniques, can simultaneously generate the associated depth maps, while the inertial sensor offers attitude information. Through the fusion of data from multiple sensors, the positions of the UUVs can be predicted. The Bundle Adjustment (BA) method is then used to recalculate the rotation matrix and the translation vector to improve accuracy. Experiments were carried out in a tank to illustrate the effectiveness and accuracy of the investigated method, with an ultra-wideband (UWB) positioning system providing reference trajectories. It is concluded that the developed positioning system is able to estimate the trajectories of UUVs accurately, whilst being stable and robust.
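The BA refinement step mentioned above solves a least-squares problem for the rotation matrix and translation vector. As a self-contained stand-in for that step (full BA jointly optimises points and poses, which the abstract does not detail), the sketch below recovers R and t in closed form from 3D correspondences using the Kabsch/SVD method; the setup and data are synthetic.

```python
import numpy as np

def refine_pose(p_world, p_cam):
    """Recover R and t minimising sum ||R p_world_i + t - p_cam_i||^2
    via the Kabsch/SVD method, a closed-form analogue of the
    least-squares pose step inside bundle adjustment."""
    cw = p_world.mean(axis=0)
    cc = p_cam.mean(axis=0)
    H = (p_world - cw).T @ (p_cam - cc)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cw
    return R, t

# Synthetic check: rotate points 90 degrees about z, then shift them.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
pts = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
obs = pts @ Rz.T + np.array([0.1, -0.2, 0.3])
R, t = refine_pose(pts, obs)  # recovers Rz and the shift
```

With a LiDAR camera, the 3D points on both sides come directly from the depth maps, which is what makes this refinement well conditioned even under sparse features.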
Navigation is a challenging problem in the area of unmanned underwater vehicles, due to significant electromagnetic wave attenuation and the uncertainties of underwater environments. Conventional methods, mainly implemented with acoustic devices, suffer limitations such as high cost, terrain effects, and low refresh rates. In this paper, a novel low-cost underwater visual navigation method, named Integrated Visual Odometry with a Stereo Camera (IVO-S), is investigated. Unlike pure visual odometry, the proposed method fuses information from inertial sensors and a sonar so that it is able to work in feature-sparse environments. In practical experiments, the vehicle was operated to follow specific closed-loop shapes. The Integrated Visual Odometry with a Monocular Camera (IVO-M) method and other popular open-source visual SLAM (Simultaneous Localisation and Mapping) systems, such as ORB-SLAM2 and VINS-Mono, were used to provide comparative results. The cumulative error ratio was used as the quantitative metric to analyse the practical test results. It is shown that the IVO-S method is able to work in underwater sparse-feature environments with high accuracy, whilst also being a low-cost solution. INDEX TERMS Underwater navigation, underwater vehicles, visual-inertial odometry, sensor fusion.
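The cumulative error ratio used for the closed-loop evaluation is not defined in the abstract; a common form for closed-loop tests divides the end-point drift by the total path length, which is what this illustrative sketch computes (the paper's exact definition may differ).

```python
import numpy as np

def cumulative_error_ratio(estimated, ground_truth):
    """End-point drift between estimated and reference trajectories,
    divided by the total reference path length."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    drift = np.linalg.norm(estimated[-1] - ground_truth[-1])
    path_len = np.sum(np.linalg.norm(np.diff(ground_truth, axis=0), axis=1))
    return drift / path_len

# A 4 m square loop whose estimate ends 0.1 m from the start: 2.5 %.
gt = [[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]]
est = [[0, 0], [1, 0], [1, 1], [0, 1], [0.1, 0]]
ratio = cumulative_error_ratio(est, gt)
```

For closed-loop shapes the reference end point equals the start point, so this metric needs no external positioning system, which suits low-cost underwater tests.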