2022
DOI: 10.3390/machines10070571
Estimation of Positions and Poses of Autonomous Underwater Vehicle Relative to Docking Station Based on Adaptive Extraction of Visual Guidance Features

Abstract: The underwater docking of autonomous underwater vehicles (AUVs) is conducive to energy supply and data exchange. A vision-based high-precision estimation of "the positions and poses of an AUV relative to a docking station" (PPARD) is a necessary condition for successful docking. Classical binarization methods have a low success rate in extracting guidance features from fuzzy underwater images, resulting in an insufficient stability of the PPARD estimation. Based on the fact that guidance lamps are blue strong …
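The contrast the abstract draws between classical binarization and adaptive extraction can be sketched in a few lines. The example below is illustrative, not the paper's method: it compares a fixed global threshold (which misses a guidance lamp dimmed by turbid water) with a simple statistics-based adaptive threshold on the blue channel. The scene, threshold values, and the `k` parameter are all assumptions chosen for the demonstration.

```python
import numpy as np

def fixed_threshold(blue, t=200):
    """Classical binarization: one global threshold on the blue channel.
    Fails when haze dims the lamp below the preset cutoff."""
    return blue > t

def adaptive_threshold(blue, k=3.0):
    """Adaptive binarization (illustrative): threshold set relative to the
    image's own mean and spread, so it tracks turbidity and exposure."""
    t = blue.mean() + k * blue.std()
    return blue > t

# Hypothetical hazy blue channel with one 4x4 guidance-lamp blob.
rng = np.random.default_rng(0)
blue = rng.normal(60, 10, size=(64, 64)).clip(0, 255)
blue[30:34, 30:34] = 160  # lamp: bright relative to haze, but below 200

fixed = fixed_threshold(blue)        # global cutoff misses the dimmed lamp
adaptive = adaptive_threshold(blue)  # relative cutoff isolates the blob
print(fixed.sum(), adaptive.sum())
```

Under these assumed scene statistics, the fixed threshold selects no pixels while the adaptive one recovers the lamp blob, which is the failure mode the abstract attributes to classical binarization on fuzzy underwater images.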

Cited by 2 publications
(1 citation statement)
References 20 publications
“…Experimental results demonstrate that the GQPSO algorithm achieves higher matching precision and speed, especially when the number of iterations exceeds 40, achieving a matching precision of up to 100%. Lv et al. [41] present an adaptive visual feature extraction method tailored to guide lights with strong blue point light sources. It enhances guide images to emphasize these features and estimates their position and orientation by minimizing imaging errors.…”
Section: Traditional Feature Extraction Algorithms
confidence: 99%