2012
DOI: 10.4031/mtsj.46.2.5

A Behavior-Based Mission Planner for Cooperative Autonomous Underwater Vehicles

Abstract: Due to their applications in marine research, oceanography, and undersea exploration, autonomous underwater vehicles (AUVs) and the related control algorithms have recently been under intense investigation. In this work, we address target detection and tracking issues, proposing a control strategy that is able to benefit from cooperation among the robots within the fleet. In particular, we introduce a behavior-based planner for cooperative AUVs, proposing an algorithm able to search for and recognize targets, in bo…

Cited by 16 publications (6 citation statements)
References 27 publications (36 reference statements)
“…The main contributions of this paper are as follows: 1) Compared with the coordinated control methods in [3], [6]–[9], [35], a distributed event-triggered method is proposed, in which the control instructions are updated based on the auxiliary state and distributed event-triggered conditions, rather than at uniform time intervals based on the sampling time. By adopting the proposed control strategy, the energy consumption is greatly reduced.…”
Section: Introduction, A. Aims and Motivation (mentioning)
confidence: 99%
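The statement above contrasts event-triggered updates with fixed-rate sampling. A minimal sketch of that idea, assuming a hypothetical first-order plant and a made-up threshold (none of these names or dynamics come from the cited paper): the controller recomputes its command only when the state has drifted far enough from the last-broadcast auxiliary state.

```python
def event_triggered_run(steps=200, threshold=0.05):
    """Toy single-agent event-triggered loop (illustrative only).

    A new control command is computed (an "event") only when the gap
    between the true state and the last-broadcast auxiliary state
    exceeds `threshold`, instead of at every sampling instant.
    """
    x = 1.0        # true state
    x_hat = 0.0    # auxiliary (last-broadcast) state
    u = 0.0        # control input, held constant between events
    events = 0
    for _ in range(steps):
        if abs(x - x_hat) > threshold:   # event-triggering condition
            x_hat = x                    # broadcast the current state
            u = -0.5 * x_hat             # feedback on the broadcast state
            events += 1
        x = x + 0.1 * (0.2 * x + u)      # mildly unstable first-order plant
    return events
```

Because the gap is reset at each broadcast, control updates fire far less often than the sampling instants, which is the energy-saving effect the quote describes.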
“…In the field of neural networks research, it is often suggested that neural networks based on associative learning laws can model the mechanisms of classical conditioning, while neural networks based on reinforcement learning laws can model the mechanisms of operant conditioning [29, 32]. Reinforcement learning is used to acquire navigation skills for autonomous vehicles, updating both the vehicle model and the optimal behaviour at the same time [24, 33–38].…”
Section: Autonomous Navigation with Obstacle Avoidance Using Neural N… (mentioning)
confidence: 99%
“…The SODMN is a kinematic adaptive neuro-controller and a real-time, unsupervised neural network that learns to control autonomous underwater and surface vehicles in a nonstationary environment. The SODMN combines associative learning and Vector Associative Map (VAM) learning [24, 28, 36–38] to generate transformations between spatial and velocity coordinates. The transformations are learned in an unsupervised training phase, during which the vehicle moves as a result of randomly selected velocities of its actuators.…”
Section: Autonomous Navigation with Obstacle Avoidance Using Neural N… (mentioning)
confidence: 99%
“…This vehicle can independently make an intelligent decision about its next movement location using its neighbors' local information and on-board intelligence, without human guidance [4]. The benefits of exploiting a group of AUVs, as opposed to a single vehicle, become evident when considering performance, cost, fault tolerance, and re-configurability [5]. It is prohibitive to use a central controller to guide the AUVs' behavior due to ever-changing, unknown environmental conditions, limited bandwidth, and lossy communication media [6].…”
Section: Introduction (mentioning)
confidence: 99%
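Decision-making from neighbors' local information, with no central controller, is commonly illustrated by average consensus. A minimal sketch under assumed values (the line topology, weight, and measurements below are made up for illustration): each agent repeatedly nudges its estimate toward its neighbors' values and all agents converge to the global average without any node ever seeing every value.

```python
def consensus_round(values, neighbors, w=0.3):
    """One synchronous update: each agent moves toward its neighbors' values."""
    return [v + w * sum(values[j] - v for j in neighbors[i])
            for i, v in enumerate(values)]

def run_consensus(values, neighbors, rounds=50, w=0.3):
    for _ in range(rounds):
        values = consensus_round(values, neighbors, w)
    return values

# four AUVs in a line communication topology: 0-1-2-3
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
est = run_consensus([0.0, 2.0, 4.0, 10.0], line)
# every agent ends up near the global average 4.0 using only neighbor data
```

The update weight must be small enough for stability (here `w = 0.3` with maximum degree 2); the same neighbor-only pattern underlies the distributed coordination the quote motivates.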