52nd Aerospace Sciences Meeting 2014
DOI: 10.2514/6.2014-0990
Autonomous Soaring Using Reinforcement Learning for Trajectory Generation

Cited by 30 publications (16 citation statements) · References 8 publications
“…Ref. 13 considered the learning problem of finding the center of a stationary thermal without turbulence, and used a neural-based algorithm to recover the empirical rules proposed by Reichmann (14) for locating the thermal core. Other attempts (15, 16) have used neural networks and Q-learning to find strategies for centering a turbulence-free thermal. Akos et al. (17) show that these simple rules fail even in the presence of modest velocity fluctuations modeled as Gaussian white noise, and argue for strategies that can work in realistic turbulent flows.…”
mentioning · confidence: 99%
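As a concrete illustration of the Q-learning approach this snippet alludes to, the toy sketch below learns a turn-tightening rule from a climb-rate trend signal. The state and action discretization, reward, and environment dynamics are invented here for illustration and are not taken from references 15-17.

```python
# Minimal sketch (not the cited works' method): tabular Q-learning for
# centering a stationary, turbulence-free thermal. States, actions, and
# rewards are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 3    # climb-rate trend: 0 = falling, 1 = flat, 2 = rising
N_ACTIONS = 3   # bank command: 0 = tighten turn, 1 = hold, 2 = widen turn
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    """Toy stand-in for a soaring simulator: tightening the turn while
    the climb rate is rising tends to keep the aircraft in lift."""
    reward = 1.0 if (state == 2 and action == 0) else -0.1
    next_state = rng.integers(N_STATES)  # placeholder dynamics
    return next_state, reward

state = rng.integers(N_STATES)
for _ in range(5000):
    # epsilon-greedy action selection
    if rng.random() < EPS:
        action = rng.integers(N_ACTIONS)
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # standard Q-learning temporal-difference update
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print(Q)  # learned preference: tighten the turn when climb rate is rising
```

Because the toy reward is deterministic given the state-action pair, the table converges quickly; in a real turbulent flow, as Akos et al. (17) point out, the climb-rate signal is far noisier and simple rules of this kind degrade.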
“…Gudmondsson et al. developed a lift-seeking, sink-avoidance algorithm based on a potential-flow method and a best-path search, and simulated flight through mountainous terrain [19]. Reinforcement learning strategies have also been applied to enable a UAV to autonomously harvest energy from the atmosphere [25-27]. Since this area of work is relatively new, most of the literature simulates soaring methods without flight testing. However, there are several notable exceptions.…”
Section: Acknowledgments · mentioning · confidence: 99%
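For context on what such simulation studies typically assume, the sketch below implements a simple bell-shaped updraft model of the kind commonly used when testing soaring strategies in simulation. The functional form and parameter values are assumptions for illustration, not the model used in [19] or [25-27].

```python
# Hedged sketch: an axisymmetric, bell-shaped thermal updraft model,
# typical of soaring simulations. Form and parameters are assumptions.
import numpy as np

def updraft(x, y, x_c=0.0, y_c=0.0, w_max=3.0, radius=60.0):
    """Vertical wind speed (m/s) at (x, y) for a thermal centered at
    (x_c, y_c) with core strength w_max and characteristic radius (m)."""
    r2 = ((x - x_c) ** 2 + (y - y_c) ** 2) / radius ** 2
    return w_max * np.exp(-r2)

# Example: the sampled climb rate weakens as the aircraft drifts off-core,
# which is the signal a lift-seeking controller must exploit.
for r in (0.0, 30.0, 60.0, 120.0):
    print(f"offset {r:5.0f} m -> updraft {updraft(r, 0.0):.2f} m/s")
```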
“…A subfield of intelligent control, neurocontrol is characterized by the use of neural networks in control systems. In particular, reinforcement learning (RL) algorithms and deep neural networks have already been explored in the context of dynamic soaring [12-15] to train neural networks capable of exhibiting generalized, adaptable soaring behavior. That said, the majority of the existing literature on neural networks in aerial control systems has focused on fixed-topology networks, where the structure of the nodes and connections is kept constant [16-18].…”
Section: Introduction · mentioning · confidence: 99%
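To make the fixed-topology distinction concrete, the sketch below builds a small policy network whose layer shapes are fixed at construction; training would only adjust the weight values, never the structure. The observation vector, layer sizes, and action set are hypothetical, not taken from [16-18].

```python
# Illustrative sketch of a "fixed-topology" policy network: layer sizes
# and connections are set once; only weights would be trained.
import numpy as np

rng = np.random.default_rng(1)

def init_policy(n_obs=4, n_hidden=16, n_act=3):
    """Two-layer tanh network; the topology (array shapes) never changes."""
    return {
        "W1": rng.normal(0.0, 0.1, (n_obs, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.1, (n_hidden, n_act)),
        "b2": np.zeros(n_act),
    }

def act(params, obs):
    """Map an observation (e.g., airspeed, climb rate, bank, heading)
    to action scores; only the weight values are learned."""
    h = np.tanh(obs @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

params = init_policy()
obs = np.array([18.0, 1.2, 0.3, 0.05])  # hypothetical state vector
print(act(params, obs))
```

A topology-evolving approach, by contrast, would also mutate the shapes and connections in `init_policy` during search rather than holding them constant.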