2021
DOI: 10.1002/fld.5025

Learning how to avoid obstacles: A numerical investigation for maneuvering of self‐propelled fish based on deep reinforcement learning

Abstract: The maneuvering of a self-propelled fish avoiding obstacles under intelligent control is investigated by numerical simulation. The NACA0012 airfoil is adopted as the two-dimensional fish model. To achieve autonomous cruising of the fish model in a complex environment with obstacles, a hydrodynamics/kinematics coupling simulation method is developed with artificial intelligence (AI) control based on deep reinforcement learning (DRL). The Navier-Stokes (NS) equations in the arbitrary Lagrangian-Eulerian (ALE) fram…

Cited by 12 publications (6 citation statements)
References 34 publications (30 reference statements)
“…Compared to bionic action control tasks and motion control tasks, decision making tasks for bionic underwater robots are more diverse, such as searching [69], obstacle avoidance [115], formation control [116, 117], and other swarm strategies [118, 119, 120]. The majority of current research on RL-based decision making for bionic underwater robots is conducted in simulation environments.…”
Section: RL-based Methods in Task Spaces of Bionic Underwater Robots
confidence: 99%
“…From the perspective of obstacle avoidance tasks, a one-step actor–critic-based obstacle avoidance algorithm for self-propelled fish was designed in [115], which controls the robot to avoid multiple obstacles. In addition, an interesting water polo ball heading strategy for robotic fish with hybrid fin propulsion was proposed [121], which decomposes the action and is implemented based on the SAC method.…”
Section: RL-based Methods in Task Spaces of Bionic Underwater Robots
confidence: 99%
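The one-step actor–critic scheme named in the statement above can be sketched in generic tabular form. This is a minimal illustration of the method itself, not the cited paper's hydrodynamic setup: the toy environment, state/action sizes, and learning rates below are all hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical sizes and rates, chosen only for illustration.
N_STATES, N_ACTIONS = 16, 4
ALPHA_W, ALPHA_THETA, GAMMA = 0.1, 0.05, 0.99

w = [0.0] * N_STATES                                   # critic: state values
theta = [[0.0] * N_ACTIONS for _ in range(N_STATES)]   # actor: preferences

def policy(s):
    """Softmax over the action preferences at state s."""
    m = max(theta[s])
    exps = [math.exp(t - m) for t in theta[s]]
    z = sum(exps)
    return [e / z for e in exps]

def step(s, a):
    """Toy stand-in environment: action 0 'collides' and is penalized."""
    r = -1.0 if a == 0 else 0.1
    s_next = (s + 1) % N_STATES
    return s_next, r, s_next == 0

s, I = 0, 1.0
for _ in range(2000):
    p = policy(s)
    a = random.choices(range(N_ACTIONS), weights=p)[0]
    s_next, r, done = step(s, a)
    # One-step TD error drives both the critic and the actor update.
    delta = r + (0.0 if done else GAMMA * w[s_next]) - w[s]
    w[s] += ALPHA_W * delta
    for b in range(N_ACTIONS):            # gradient of log softmax policy
        grad = (1.0 if b == a else 0.0) - p[b]
        theta[s][b] += ALPHA_THETA * I * delta * grad
    I = 1.0 if done else I * GAMMA
    s = 0 if done else s_next
```

After training, the learned policy assigns low probability to the penalized "collision" action; in the obstacle-avoidance setting of [115], the same TD-error signal would instead come from the simulated flow environment.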
“…Kaiwen et al proposed a new distributed framework for multi-cell collaboration or competitive beamforming, designing limited information exchange schemes to improve global performance [5]. Yan et al studied the problem of self-propelled fish swarm OA maneuver under intelligent control through numerical simulation, and proposed a hydrodynamic simulation method based on DRL and artificial intelligence control to help the potential application of bionic robot swarm in engineering [6]. Zhu et al proposed a real-time robot anti-collision method to improve the overall quality and speed of human-machine cooperation engineering, learned direct control commands from the original depth image through the self-supervised reinforcement learning algorithm, and verified the effectiveness of its algorithm through experiments [7].…”
Section: Related Works
confidence: 99%
“…Colabrese et al [9] showcased the efficacy of reinforcement learning in addressing Zermelo's navigation problem, employing bio-inspired AUVs for vertical navigation within the Arnold-Beltrami-Childress (ABC) flow. Yan [10,11] trained AUVs using DRL to follow predetermined trajectories, and subsequently applied DRL to train bio-inspired AUVs in obstacle avoidance. Zhu et al [12] successfully accomplished point-to-point navigation of fish-like swimmers within vortical flows using DRL.…”
Section: Introduction
confidence: 99%
“…Such a problem falls under the category of a sparse reward problem, which can involve single or multiple objectives. Previous researchers have used DRL to train bio-inspired AUVs to perform navigation, obstacle avoidance, and other behaviors [9][10][11]. However, these studies have significantly simplified the real-world complex flow field and intelligent fish tasks to reduce both the environment's complexity and the algorithm's solution space.…”
Section: Introduction
confidence: 99%
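The sparse-reward issue raised in the statement above can be shown with a minimal sketch (the goal position, tolerance, and function names here are hypothetical, not taken from the cited studies): a reward that fires only at the goal gives the learner almost no signal along a trajectory, whereas a shaped, dense reward provides per-step feedback on progress.

```python
import math

# Hypothetical 2D navigation goal, for illustration only.
GOAL = (10.0, 0.0)

def sparse_reward(pos, tol=0.5):
    """+1 only on reaching the goal; 0 everywhere else."""
    return 1.0 if math.dist(pos, GOAL) < tol else 0.0

def shaped_reward(pos, prev_pos):
    """Dense signal: reward the per-step reduction in distance to the goal."""
    return math.dist(prev_pos, GOAL) - math.dist(pos, GOAL)
```

Under the sparse reward, every step of a trajectory that never reaches the goal returns 0, so the agent must stumble onto the goal by exploration alone; the shaped variant rewards each step of progress, at the cost of biasing the learned behavior toward the shaping heuristic.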