2005
DOI: 10.1524/itit.2005.47.5_2005.250
Reinforcing the Driving Quality of Soccer Playing Robots by Anticipation (Verbesserung der Fahreigenschaften von fußballspielenden Robotern durch Antizipation)

Abstract: This paper shows how an omnidirectional robot can learn to correct inaccuracies when driving, or even learn to use corrective motor commands when a motor fails, whether partially or completely. Driving inaccuracies are unavoidable, since not all wheels have the same grip on the surface, or not all motors can provide exactly the same power. When a robot starts driving, the real system response differs from the ideal behavior assumed by the control software. Also, malfunctioning motors are a fact of life that we…
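The idea summarized in the abstract — that the real system response differs from the ideal control model, and that the deviation can be learned and pre-compensated — can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's actual method: it assumes the distortion is a fixed linear map `A_true` from commanded body velocities (vx, vy, omega) to achieved velocities, fits that map from observed (command, outcome) pairs by least squares, and then inverts it to compensate.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): learn a linear
# distortion model mapping commanded body velocities (vx, vy, omega)
# to the velocities actually achieved, then pre-compensate commands.

rng = np.random.default_rng(0)

# Simulated ground truth: one motor delivers only 60% power, and the
# wheels couple slightly, so the response is skewed.
A_true = np.array([[0.9, 0.0, 0.05],
                   [0.0, 0.6, 0.00],
                   [0.1, 0.0, 0.95]])

# Collect (command, observed) pairs, e.g. from overhead-camera tracking.
commands = rng.uniform(-1.0, 1.0, size=(200, 3))
observed = commands @ A_true.T + rng.normal(0.0, 0.01, size=(200, 3))

# Least-squares fit of the distortion model: commands @ X ~= observed.
X, *_ = np.linalg.lstsq(commands, observed, rcond=None)
A_hat = X.T

def compensate(v_desired):
    """Command the velocity that, after distortion, yields v_desired."""
    return np.linalg.solve(A_hat, v_desired)

v_des = np.array([0.5, 0.5, 0.0])
v_achieved = A_true @ compensate(v_des)   # close to v_des despite the weak motor
```

The same compensation idea applies when a motor degrades mid-run, provided the model is re-fitted from fresh data.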

Cited by 10 publications (3 citation statements); references 11 publications.
“…Stochastic mobility prediction models are typically used in reactive control or control compensation, where predictions, e.g., risk of slip, help to minimize deviation from a reference path (Karumanchi & Iagnemma; Helmick et al.). Techniques have also been proposed for learning such control compensation through experience in the context of self-modeling (Bongard, Zykov, & Lipson; Gloye, Wiesel, Tenchio, & Simon). Reactive techniques can compensate for control uncertainty, but they require a reference path to be provided a priori.…”
Section: Related Work
confidence: 99%
“…Instead of learning behaviors, ER techniques may be used to directly learn a model of a real mechanical device [11,12,56,88]. Learning techniques can even be used to correct model errors online [33], or to learn a complete model of the robot in action [13], thus opening the way towards robots able to adapt to motor failures in an online evolution scheme.…”
Section: Reality Gap
confidence: 99%
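The excerpt above mentions correcting model errors online, so that a robot can adapt to a motor failure while running. A hedged sketch of that idea, under assumed simplifications (a linear motion model updated by stochastic gradient descent on the prediction error; not the cited papers' actual algorithms):

```python
import numpy as np

# Sketch of online model correction: start from the ideal model
# (commanded velocity = achieved velocity) and update it from a stream
# of (command, observed motion) pairs, so the model tracks the plant
# even after a motor degrades at run time.

rng = np.random.default_rng(1)
dim = 3

A_model = np.eye(dim)   # ideal model assumed by the control software
lr = 0.1                # online learning rate

A_real = np.eye(dim)
A_real[1, 1] = 0.5      # one motor fails partially: only 50% power on vy

for step in range(500):
    u = rng.uniform(-1.0, 1.0, dim)     # commanded velocity
    y = A_real @ u                       # observed motion
    err = A_model @ u - y                # prediction error
    A_model -= lr * np.outer(err, u)     # SGD step on the squared error

# After adaptation, A_model approximates the degraded plant A_real,
# and a controller can use it to issue corrective commands.
```

The learning rate must keep `lr * ||u||^2` well below 2 for the update to remain stable; here `||u||^2 <= 3`, so `lr = 0.1` is comfortably inside that bound.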
“…After Nicod, this line of research was for a long time discontinued, until it was reinitiated in the field of artificial intelligence and robotics (Kuipers, 1978; Pierce and Kuipers, 1997). Nowadays, a whole body of work has accumulated describing how robotic agents can build models of themselves and their environments (Kaplan and Oudeyer, 2004; Klyubin et al., 2004, 2005; Gloye et al., 2005; Bongard et al., 2006; Hersch et al., 2008; Hoffmann et al., 2010; Gordon and Ahissar, 2011; Sigaud et al., 2011; Koos et al., 2013). However, the question of the acquisition of spatial concepts as something independent of particular sensory coding remains rather poorly studied [however, see Philipona et al. (2003), Roschin et al. (2011), and Laflaquiere et al. (2012)].…”
Section: Introduction
confidence: 99%