2018 IEEE International Conference of Intelligent Robotic and Control Engineering (IRCE)
DOI: 10.1109/irce.2018.8492943
Pixel-to-Action Policy for Underwater Pipeline Following via Deep Reinforcement Learning

Cited by 8 publications (2 citation statements). References 5 publications.
“…Learning-based models were used to translate the acquired image into corresponding control commands [19]. However, the reliability of vision-only approaches was limited by visibility and by the difficulty of acquiring underwater datasets.…”
Section: B. Related Work
confidence: 99%
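For context, the pixel-to-action idea referenced in this statement, a network that maps a raw camera frame directly to control commands, can be sketched as below. This is a minimal illustration assuming PyTorch; the layer sizes, the 84x84 input, and the two outputs (surge and yaw) are illustrative choices, not details taken from the cited paper.

# Minimal pixel-to-action sketch (assumed PyTorch; architecture and
# action set are illustrative, not the paper's exact network).
import torch
import torch.nn as nn

class PixelToActionPolicy(nn.Module):
    """Maps a downscaled camera frame directly to control commands."""
    def __init__(self, n_actions: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # An 84x84 input yields 32 * 9 * 9 features after the two conv layers.
        self.head = nn.Sequential(
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions), nn.Tanh(),  # commands bounded in [-1, 1]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))

# Usage: one 84x84 RGB frame -> hypothetical [surge, yaw] command.
frame = torch.rand(1, 3, 84, 84)
command = PixelToActionPolicy()(frame)
print(command.shape)  # torch.Size([1, 2])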
“…DRL combines the perception ability of deep learning with the decision-making ability of reinforcement learning, as shown in Figure 15(b). It can directly control the AUV's motion from the input image, solving the AUV path-planning problem [103]. Cao et al. first used sonar imaging to obtain environmental information and build a grid map of the AUV search area, then adopted the asynchronous advantage actor-critic (A3C) network structure so that the AUV could learn from its own experience and generate search strategies for various unknown environments.…”
Section: Human-inspired Algorithms
confidence: 99%
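The A3C structure mentioned in this statement pairs a policy head (actor) with a state-value head (critic) and weights the policy gradient by the advantage. The following is a minimal sketch, assuming PyTorch; the feature vector and discrete action set are placeholders rather than the exact sonar/grid-map setup of Cao et al.

# Illustrative actor-critic head and A3C-style loss (assumed PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    """Shared features feed a policy (logits) and a state-value estimate."""
    def __init__(self, n_features: int = 256, n_actions: int = 5):
        super().__init__()
        self.actor = nn.Linear(n_features, n_actions)  # policy logits
        self.critic = nn.Linear(n_features, 1)         # state value V(s)

    def forward(self, features: torch.Tensor):
        return self.actor(features), self.critic(features)

def a3c_loss(logits, values, actions, returns, entropy_coef=0.01):
    """Advantage-weighted policy gradient plus value and entropy terms."""
    advantage = returns - values.squeeze(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    return policy_loss + 0.5 * value_loss - entropy_coef * entropy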