The use of robots for underwater exploration has increased in recent years. Automating tasks such as monitoring, inspection, and underwater maintenance requires an understanding of the robot's environment, making object recognition in the scene a critical issue for these systems. In this work, an underwater object classification pipeline applied to acoustic images acquired by a Forward-Looking Sonar (FLS) is studied. The object segmentation combines thresholding, connected-pixel search, and intensity-peak analysis. The object descriptor extracts intensity and geometric features from the detected objects. A comparison between Support Vector Machine, K-Nearest Neighbors, and Random Trees classifiers is presented. An open-source tool was developed to annotate and classify the objects and to evaluate classification performance. The proposed method efficiently segments and classifies the structures in the scene using a real dataset acquired by an underwater vehicle in a harbor area. Experimental results demonstrate the robustness and accuracy of the method described in this paper.
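The segmentation-and-classification pipeline summarized above can be sketched in a minimal form. This is an illustrative reconstruction, not the authors' implementation: the function names are hypothetical, `scipy.ndimage.label` stands in for the connected-pixel search, and scikit-learn's `RandomForestClassifier` is used as a stand-in for the Random Trees classifier mentioned in the abstract.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier


def segment_fls_image(img, thresh):
    """Threshold the sonar image, group connected pixels into candidate
    objects, and record intensity/geometric features per object.
    Hypothetical sketch of the pipeline described in the abstract."""
    mask = img > thresh                      # intensity thresholding
    labels, n = ndimage.label(mask)          # connected-pixel search
    objects = []
    for i in range(1, n + 1):
        region = labels == i
        ys, xs = np.nonzero(region)
        objects.append({
            "area": int(region.sum()),                   # geometric feature
            "mean_intensity": float(img[region].mean()), # intensity feature
            "peak_intensity": float(img[region].max()),  # peak-of-intensity
            "bbox": (ys.min(), xs.min(), ys.max(), xs.max()),
        })
    return objects


def compare_classifiers(X_train, y_train, X_test, y_test):
    """Fit the three classifier families compared in the paper and
    return their test accuracies. RandomForest approximates Random Trees."""
    clfs = {
        "SVM": SVC(),
        "KNN": KNeighborsClassifier(n_neighbors=3),
        "RandomForest": RandomForestClassifier(random_state=0),
    }
    return {name: clf.fit(X_train, y_train).score(X_test, y_test)
            for name, clf in clfs.items()}
```

In practice the feature dictionaries would be flattened into vectors (area, mean intensity, peak intensity, bounding-box aspect ratio, etc.) before being passed to `compare_classifiers`.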
Autonomous underwater vehicles are a prominent tool for underwater exploration because they can access dangerous places while avoiding risks to human beings. However, autonomous navigation remains a challenge due to characteristics of the environment that degrade sensor performance and robot perception. In this context, this paper proposes a loop closure detector for the simultaneous localization and mapping problem in semi-structured environments, using acoustic images acquired by forward-looking sonars. The images are segmented by an adaptive approach based on acoustic beam analysis. A pose-invariant topological graph is built to represent the relationships between image features, and loop closure detection is achieved through graph comparison. The approach is evaluated in a real environment at a marina. The results show that all loop closures in the dataset are detected with high precision and that the method is invariant to image rotation.
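The pose-invariant graph comparison described above can be illustrated with a simple sketch. This is an assumption-laden reconstruction, not the paper's method: here each image's feature graph is reduced to a histogram of pairwise distances between feature centroids (a quantity unchanged by rotation and translation of the scene), and two places are compared by histogram intersection.

```python
from itertools import combinations

import numpy as np


def graph_signature(centroids, bins=8, max_dist=50.0):
    """Summarize a feature graph by its edge-length distribution.
    Pairwise distances are invariant to rotation/translation, which is
    one simple way to obtain the pose invariance the paper requires.
    Hypothetical sketch; parameter values are illustrative."""
    dists = [np.hypot(a[0] - b[0], a[1] - b[1])
             for a, b in combinations(centroids, 2)]
    hist, _ = np.histogram(dists, bins=bins, range=(0.0, max_dist))
    total = hist.sum()
    return hist / total if total else hist.astype(float)


def loop_closure_score(sig_a, sig_b):
    """Histogram intersection in [0, 1]; a high score suggests the two
    images were acquired at the same place (a loop closure candidate)."""
    return float(np.minimum(sig_a, sig_b).sum())
```

A rotated copy of the same scene yields an identical signature, so its score against the original is 1.0, while a structurally different scene scores low; a real detector would threshold this score and verify candidates geometrically.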