2022 · DOI: 10.3390/machines10070500
A Multi-Objective Reinforcement Learning Based Controller for Autonomous Navigation in Challenging Environments

Abstract: In this paper, we introduce a self-trained controller for autonomous navigation in static and dynamic (with moving walls and nets) challenging environments (including trees, nets, windows, and pipes) using deep reinforcement learning, trained simultaneously with multiple rewards. We train our RL algorithm in a multi-objective way. Our algorithm learns to generate continuous actions for controlling the UAV, and it aims to generate waypoints for the UAV in such a way as to reach a goal area (shown by an RG…
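The abstract describes a policy that maps observations to continuous waypoint actions and is trained against several rewards at once. The following is a minimal sketch of that idea, assuming a weighted-sum scalarization of the objectives and a bounded (dx, dy, dz) waypoint offset as the action; the reward terms, weights, and function names are illustrative assumptions rather than the paper's actual design.

```python
import numpy as np

def scalarize_rewards(reward_terms, weights):
    """Weighted-sum scalarization of several objectives into one training signal."""
    return sum(weights[k] * reward_terms[k] for k in reward_terms)

def step_reward(dist_to_goal, prev_dist_to_goal, collided):
    # Illustrative objectives (assumed, not the paper's): reach the goal area,
    # avoid obstacles (trees, nets, walls), and make progress between steps.
    terms = {
        "goal": -dist_to_goal,                         # closer to the goal, higher reward
        "collision": -1.0 if collided else 0.0,        # penalty on contact with an obstacle
        "progress": prev_dist_to_goal - dist_to_goal,  # positive when moving toward the goal
    }
    weights = {"goal": 1.0, "collision": 5.0, "progress": 0.5}  # assumed weighting
    return scalarize_rewards(terms, weights)

def continuous_waypoint_action(policy_output, max_step=0.5):
    """Map a raw policy output in R^3 to a bounded (dx, dy, dz) waypoint offset."""
    return max_step * np.tanh(np.asarray(policy_output, dtype=float))

# One environment step: the scalarized reward drives the RL update, and the bounded
# offset becomes the next waypoint command for the UAV.
print(step_reward(dist_to_goal=9.4, prev_dist_to_goal=10.0, collided=False))
print(continuous_waypoint_action([0.2, -0.8, 0.1]))
```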

Cited by 15 publications (2 citation statements)
References 23 publications (25 reference statements)
“…More similar to the present study, Camci et al [36] utilize a quadrotor with a depth camera for obstacle avoidance but with discrete actions. Dooraki et al [37] also propose a similar application with continuous actions in the position domain. The present research differs by proposing safety boundaries and enabling heading angle steps together with position steps.…”
Section: Related Work (mentioning, confidence: 99%)
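The contrast this statement draws, discrete position steps in [36] versus continuous actions in the position domain in [37], plus the citing work's heading-angle steps, can be sketched as two action-space formulations; the step sizes, bounds, and names below are illustrative assumptions, not the cited implementations.

```python
import numpy as np

# Discrete action set: a fixed menu of position steps the agent selects from.
DISCRETE_ACTIONS = [
    np.array([ 0.5,  0.0, 0.0]),   # step forward
    np.array([-0.5,  0.0, 0.0]),   # step back
    np.array([ 0.0,  0.5, 0.0]),   # step left
    np.array([ 0.0, -0.5, 0.0]),   # step right
]

def discrete_step(action_index):
    """Discrete action space: pick one predefined position step."""
    return DISCRETE_ACTIONS[action_index]

def continuous_step(policy_output, max_pos_step=0.5, max_yaw_step=np.deg2rad(15.0)):
    """Continuous action space: real-valued position step plus a heading-angle step."""
    out = np.tanh(np.asarray(policy_output, dtype=float))  # squash to [-1, 1]
    d_position = max_pos_step * out[:3]                     # (dx, dy, dz) step
    d_heading = max_yaw_step * out[3]                       # yaw increment (radians)
    return d_position, d_heading

print(discrete_step(0))
print(continuous_step([0.3, -0.1, 0.0, 0.8]))
```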
“…By high-level control, we refer to a trajectory of waypoints or attitudes generated by the controller. High-level control works include [5], where an end-to-end approach using deep RL learns to control three axes of a quad-copter when an RGB-D image is provided as input.…”
Section: Literature Review (mentioning, confidence: 99%)
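The statement's notion of high-level control, a trajectory of waypoints or attitudes that a separate low-level loop then tracks, can be made concrete with a short sketch; the class and method names are illustrative assumptions, and the high-level policy below is a random placeholder standing in for a trained deep-RL model operating on RGB-D input.

```python
import numpy as np

class HighLevelController:
    """High-level control: propose the next waypoint from an RGB-D observation."""
    def next_waypoint(self, rgbd_observation, current_position):
        # Placeholder policy: a random bounded offset; in the cited work this role
        # is filled by a learned deep-RL policy conditioned on the RGB-D image.
        offset = np.random.uniform(-0.5, 0.5, size=3)
        return current_position + offset

class LowLevelController:
    """Low-level control: track the commanded waypoint with a simple proportional law."""
    def actuate(self, current_position, waypoint, gain=1.2):
        return gain * (waypoint - current_position)   # commanded velocity vector

# One control cycle: the high-level policy proposes a waypoint, the low-level loop tracks it.
rgbd = np.zeros((64, 64, 4))                 # stand-in for a depth + RGB frame
position = np.array([0.0, 0.0, 1.5])
waypoint = HighLevelController().next_waypoint(rgbd, position)
command = LowLevelController().actuate(position, waypoint)
print(waypoint, command)
```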