Abstract. This paper describes the mechanical and electrical design, as well as the control strategy, of the FU-Fighters robots, an F180 league team that won second place at RoboCup'99. It explains how we solved the computer vision and radio communication problems that arose in the course of the project. The paper mainly discusses the hierarchical control architecture used to generate the behavior of individual agents and the team. Our reactive approach is based on the Dual Dynamics framework developed by H. Jäger, in which an activation dynamics determines when a behavior is allowed to influence the actuators, and a target dynamics establishes how this is done. We extended the original framework by adding a third module, the perceptual dynamics, in which the readings of fast-changing sensors are aggregated temporally to form complex, slowly changing percepts. We describe the bottom-up design of behaviors and illustrate our approach using examples from the RoboCup domain.
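The interplay of activation and target dynamics described above can be sketched as follows. This is a minimal illustration of the Dual Dynamics idea, not the FU-Fighters implementation; the behavior name, the first-order activation law, and all percept fields are assumptions made for the example.

```python
# Illustrative sketch of Dual Dynamics: each behavior has an activation
# dynamics (when it may act) and a target dynamics (what it commands).
# All names and dynamics here are invented for illustration.

def activation_go_to_ball(percepts, a, dt=0.1):
    """Activation dynamics: first-order relaxation of the activation a
    toward 1 when the ball is visible and free, toward 0 otherwise."""
    target = 1.0 if percepts["ball_visible"] and not percepts["ball_owned"] else 0.0
    return a + dt * (target - a)

def target_go_to_ball(percepts):
    """Target dynamics: steer toward the ball's perceived bearing (radians)."""
    return percepts["ball_bearing"]

def motor_command(percepts, activations):
    """Blend behavior targets, weighted by their current activations."""
    a = activations["go_to_ball"]
    return a * target_go_to_ball(percepts)

# The activation rises smoothly while the percept holds, then gates the target.
percepts = {"ball_visible": True, "ball_owned": False, "ball_bearing": 0.5}
a = 0.0
for _ in range(50):
    a = activation_go_to_ball(percepts, a)
activations = {"go_to_ball": a}
cmd = motor_command(percepts, activations)
```

With more behaviors, the blend in `motor_command` would sum over all activation-weighted targets, so behaviors hand over control gradually rather than switching abruptly.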
This paper shows how an omnidirectional robot can learn to correct inaccuracies when driving, or even learn to use corrective motor commands when a motor fails, whether partially or completely. Driving inaccuracies are unavoidable, since not all wheels have the same grip on the surface and not all motors can provide exactly the same power. When a robot starts driving, the real system response differs from the ideal behavior assumed by the control software. Malfunctioning motors are also a fact of life that we have to take into account. Our approach is to let the control software learn how the robot reacts to instructions sent from the control computer. We use a neural network or a linear model to learn the robot's response to the commands. The model can be used to predict deviations from the desired path and to take corrective action in advance, thus increasing the driving accuracy of the robot. The model can also be used to monitor the robot and assess whether it is performing according to its learned response function. If it is not, the response function of the malfunctioning robot can be relearned and updated. We show that even if a robot loses power from a motor, the system can relearn to drive the robot along a straight path, even when the robot is a black box and we are not aware of how the commands are applied internally.
Zusammenfassung (translated from German). In this article we show how a four-wheeled holonomic robot can correct driving inaccuracies by suitably modifying the control values for the motors. These methods also work in the case of a motor failure: the controller is adapted automatically so that the robot can drive with only three motors. Imprecise control is in general unavoidable, since the wheels may have different traction on the surface, and the motors have different performance characteristics due to manufacturing tolerances and wear. Moreover, the models used to control the motors are often kept so simple that they do not exactly reflect the physical properties. Motors occasionally fail, and this should also be taken into account. In our approach, the control software learns how the robot reacts to different commands and adapts the commands accordingly before they are passed to the robot. For this we use a neural network or a linear model to predict the robot's motion. The deviation of the actual motion from the desired motion can then be used to choose more suitable commands, which improves the robot's driving considerably. The learned model can additionally be used to continuously test the state of the robot. If the deviation from reality becomes too large, another model can be selected or a new one learned. When a motor of a robot loses power, a dedicated prediction can be trained for that robot so that it drives accurately again. Especially when little informa...
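The learn-predict-correct loop described in this abstract can be sketched with the simplest of its two model choices, a linear response model. Everything below is an invented toy example (a single scalar gain per axis, a simulated 70%-power motor, a hypothetical fault threshold), not the paper's system.

```python
# Sketch of command correction with a learned linear response model,
# standing in for the paper's neural-network/linear approach.
# The simulated robot and its gain are invented for the example.

def fit_gain(commands, responses):
    """Least-squares fit of a scalar gain g with response ~= g * command."""
    num = sum(c * r for c, r in zip(commands, responses))
    den = sum(c * c for c in commands)
    return num / den

# Hypothetical black-box robot: a weak motor delivers only 70% of the
# commanded speed along this axis.
true_gain = 0.7
commands = [0.2, 0.5, 1.0, 1.5]
responses = [true_gain * c for c in commands]

g = fit_gain(commands, responses)            # learned response model
def corrected(desired):                      # pre-correct the command
    return desired / g

achieved = true_gain * corrected(1.0)        # robot now reaches the target

def is_faulty(command, observed, gain, tol=0.1):
    """Monitoring: flag a fault when the prediction error exceeds tol,
    which would trigger relearning of the response function."""
    return abs(gain * command - observed) > tol
```

If monitoring reports a fault (say the motor degrades further and `is_faulty` fires), the same `fit_gain` step is rerun on fresh command/response pairs to update the model, which is the relearning loop the abstract describes.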
We show how to apply learning methods to two robotics problems, namely the optimization of the on-board controller of an omnidirectional robot, and the derivation of a model of the physical driving behavior for use in a simulator. We show that optimal control parameters for several PID controllers can be learned adaptively by driving an omnidirectional robot on a field while evaluating its behavior, using a reinforcement learning algorithm. After training, the robots can follow the desired path faster and more elegantly than with manually adjusted parameters. Secondly, we show how to learn the physical behavior of a robot. Our system learns to predict the future position of the robots according to their reactions to the commands sent. We use the learned behavior in the simulation of the robots instead of adjusting the physical simulation model whenever the mechanics of the robot changes. The updated simulation then reflects the modified physics of the robot.
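The tune-by-evaluation idea in this abstract can be sketched as follows. Random search stands in here for the paper's reinforcement learning algorithm, and the first-order motor model, time constants, and gain ranges are all assumptions made for the example.

```python
import random

# Sketch of tuning a PID velocity controller by repeated evaluation.
# Random search replaces the paper's RL algorithm; the plant model
# v' = (u - v) / tau is an assumed stand-in for the real robot.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def evaluate(kp, ki, kd, setpoint=1.0, dt=0.02, steps=200):
    """Accumulated |tracking error| while following a velocity setpoint
    on a first-order motor model; lower is better."""
    pid, v, tau, cost = PID(kp, ki, kd), 0.0, 0.3, 0.0
    for _ in range(steps):
        err = setpoint - v
        u = pid.step(err, dt)
        v += dt * (u - v) / tau
        cost += abs(err) * dt
    return cost

# Try candidate gains and keep the best-scoring set.
random.seed(0)
best = (float("inf"), None)
for _ in range(100):
    kp, ki, kd = random.uniform(0, 5), random.uniform(0, 2), random.uniform(0, 0.1)
    best = min(best, (evaluate(kp, ki, kd), (kp, ki, kd)))
```

On the real robot, `evaluate` would correspond to driving a test path on the field and scoring the deviation, and the RL algorithm would propose new gains more cleverly than uniform sampling.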