Gaussian Processes and Reinforcement Learning for Identification and Control of an Autonomous Blimp
2007 · DOI: 10.1109/robot.2007.363075

Cited by 153 publications · 157 citation statements (all classified as "mentioning") · References 11 publications

Citation statements, ordered by relevance:
“…In recent years, GP dynamics models were more often used for learning robot dynamics [9,10,14]. However, they are usually not used for long-term planning and policy learning, but rather for myopic control and trajectory following.…”
Section: Related Work (mentioning)
confidence: 99%
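The "myopic control" this excerpt contrasts with long-term policy learning can be made concrete with a short sketch: given a learned GP dynamics model, pick at each step the action whose predicted one-step successor lies closest to a reference. Everything below is an illustrative assumption, not taken from the cited papers: the toy 1-D plant, the candidate-action grid, and scikit-learn as the GP library.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def true_dynamics(s, a):
    # Toy 1-D plant used only to generate training data.
    return 0.9 * s + 0.5 * a

# Training set: random states and actions, noisy next-state observations.
S = rng.uniform(-1, 1, size=(200, 1))
A = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([S, A])
y = true_dynamics(S, A).ravel() + 0.01 * rng.standard_normal(200)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X, y)

def myopic_action(state, target, candidates=np.linspace(-1.0, 1.0, 41)):
    """Greedy one-step control: the action whose predicted successor is nearest the target."""
    Xq = np.column_stack([np.full_like(candidates, state), candidates])
    return candidates[np.argmin(np.abs(gp.predict(Xq) - target))]

print(myopic_action(state=0.8, target=0.0))  # steers the toy state toward 0
```

A policy-learning approach would instead optimize over multi-step predicted trajectories; the gap between that and this one-step greedy rule is exactly what the excerpt points out.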
“…However, they are usually not used for long-term planning and policy learning, but rather for myopic control and trajectory following. Typically, the training data for the GP dynamics models are obtained either by motor babbling [9] or by demonstrations [14]. For the purpose of data-efficient fully autonomous learning, these approaches are not suitable: Motor babbling is data-inefficient and does not guarantee good models along a good trajectory; demonstrations would contradict fully autonomous learning.…”
Section: Related Work (mentioning)
confidence: 99%
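To illustrate the motor-babbling data collection the excerpt criticizes as data-inefficient, here is a minimal sketch: uniformly random actions excite a stand-in plant, and a GP is fit to the observed state changes. The plant, the constants, and the choice of scikit-learn are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def step(s, a):
    # Placeholder plant: in practice this is the real robot or a simulator.
    return 0.95 * s + 0.3 * a + 0.01 * rng.standard_normal()

# Motor babbling: a single rollout driven by uniformly random actions.
states, actions, deltas = [], [], []
s = 0.0
for _ in range(300):
    a = rng.uniform(-1.0, 1.0)
    s_next = step(s, a)
    states.append(s)
    actions.append(a)
    deltas.append(s_next - s)  # learn the state *change*, a common modelling choice
    s = s_next

X = np.column_stack([states, actions])
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, np.asarray(deltas))

mean, std = gp.predict(np.array([[0.2, 0.5]]), return_std=True)
```

Because the random rollout wanders wherever it happens to go, the GP can be poorly trained in exactly the regions a good trajectory would visit, which is the data-inefficiency the excerpt highlights.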
“…Due to the limited space, we simply refer to some approaches which are closely related to the experiments performed in this paper. Ko et al [6] presented an approach to improve a motion model of a blimp derived from aeronautic principles by using a GP to model the residual. Furthermore, Rottmann et al [7] and Deisenroth et al [8] learned control policies of a completely unknown system in a GP framework.…”
Section: Related Work (mentioning)
confidence: 99%
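The residual-modelling idea this excerpt attributes to Ko et al. [6] can be sketched as follows: a first-principles model gives a baseline prediction, and a GP is trained only on the leftover error between that prediction and the observed behavior. The "physics" and "real system" functions below are made-up stand-ins, not the blimp model from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

def physics_model(s, a):
    # Idealized first-principles prediction (a stand-in, not real aeronautics).
    return 0.9 * s + 0.4 * a

def real_system(s, a):
    # Same plant plus an unmodelled effect the GP should absorb.
    return physics_model(s, a) + 0.2 * np.sin(3.0 * s)

S = rng.uniform(-1, 1, size=(200, 1))
A = rng.uniform(-1, 1, size=(200, 1))
residual = (real_system(S, A) - physics_model(S, A)).ravel()

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(np.hstack([S, A]), residual)

def hybrid_predict(s, a):
    """Physics prediction plus the GP's learned correction."""
    return physics_model(s, a) + gp.predict(np.array([[s, a]]))[0]
```

The appeal of this hybrid is that the GP only has to capture what the parametric model misses, which typically needs far less data than learning the dynamics from scratch.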
“…The approach that is most closely related to the one described in this paper has recently been presented by Ko et al [11]. They also deal with the problem of learning to control an autonomous blimp and choose a similar set of methods for this task.…”
Section: Related Work (mentioning)
confidence: 99%