The UChile1 four-legged team is an effort of the Department of Electrical Engineering of the University of Chile to foster high-level research in robotics. This document describes the relevant aspects of the UChile1 software, which has been developed from scratch and is constantly updated by our team. The system has proven to be a relatively successful approach: we had acceptable participations in RoboCup 2003 and 2006. This year we have improved several aspects of our UChile1 software (mainly localization, strategy and actuation). We have also proposed an automated refereeing and analysis tool for robot games. All these aspects are reported here.

I. SYSTEM ARCHITECTURE

Our software system is divided into four task-oriented modules: vision, localization, strategy and motion control (see figure 1). The vision and motion control modules operate locally in each robot. The localization module is distributed: it operates in each robot, and a global estimate of the ball position is generated in a distributed fashion. The strategy module is also distributed, and allows the sharing of global information among the robots. In our current implementation these four task-oriented modules run, in each robot, in two different computer processes: one for vision and localization, which we call the perceptual process module, and one for motion control and strategy, which we call the engine process module. The reason for having just two processes and not four is the synchronization problems we had in our previous implementation. In the following sections each of these modules is described.

Fig. 1. Modular organization of our system: at the bottom, the low-level processes of vision and motion control; on top, the high-level processes of localization and strategy.
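As an illustration of this two-process organization, the following minimal C++ sketch shows how the perceptual and engine sides could be structured. All names (WorldState, perceptualProcess, engineProcess) are hypothetical, and threads stand in here for what the text describes as two separate operating-system processes communicating shared state.

```cpp
// Minimal sketch of the two-process split (illustrative names only).
// The perceptual side runs vision and localization; the engine side
// runs strategy and motion control.
#include <atomic>
#include <thread>

struct WorldState {            // shared estimate produced by perception
    float ball_x, ball_y;      // ball position in field coordinates
    float robot_x, robot_y, robot_theta;
};

std::atomic<bool> running{true};
WorldState world;              // in the real system, shared between processes

void perceptualProcess() {     // vision + localization
    while (running) {
        // grab a camera frame, segment it, detect ball/beacons/goals,
        // then update the (distributed) localization estimate
    }
}

void engineProcess() {         // strategy + motion control
    while (running) {
        // read the latest WorldState, pick a behavior, send joint commands
    }
}

int main() {
    std::thread perception(perceptualProcess);
    std::thread engine(engineProcess);
    perception.join();
    engine.join();
}
```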
II. VISION MODULE
A. Beacons, Goals and Ball Perception

Landmarks (beacons and goals) and the ball are perceived using a color-based vision method (most RoboCup four-legged teams use similar vision systems). Robot-relative distances and angles are estimated from a segmented image, which is built using a look-up table and a priori knowledge of the colors of the field objects. Our color-based vision system is described in detail in [D] and in our 2005 technical report [D].
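To make the look-up-table segmentation step concrete, here is a minimal C++ sketch. The table contents, the color classes and the packed 3-bytes-per-pixel YUV layout are assumptions for illustration; in practice the table would be trained offline from labeled field images and the camera's native pixel format would be used.

```cpp
// Hypothetical look-up-table color segmentation sketch.
#include <cstdint>
#include <vector>

enum ColorClass : uint8_t { NONE, ORANGE_BALL, YELLOW_GOAL, SKYBLUE_GOAL,
                            WHITE_LINE, GREEN_FIELD, PINK_BEACON };

// 2^18-entry table indexed by the 6 most significant bits of each channel
static ColorClass colorLUT[64][64][64];

inline ColorClass classify(uint8_t y, uint8_t u, uint8_t v) {
    return colorLUT[y >> 2][u >> 2][v >> 2];
}

// Segment a whole frame into per-pixel color classes
void segment(const uint8_t* yuv, int width, int height,
             std::vector<ColorClass>& out) {
    out.resize(width * height);
    for (int i = 0; i < width * height; ++i)
        out[i] = classify(yuv[3 * i], yuv[3 * i + 1], yuv[3 * i + 2]);
}
```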
B. Visual Sonar and Lines Perception

Using the idea of the visual sonar, images can be analyzed very quickly using virtual scan rays (see figure 2); lines perpendicular to these scan rays can be easily found. Thus, in order to detect a pixel that lies on a line, we check the difference in the Y channel between adjacent pixels along the scan rays. The Y channel describes the luminance of the pixels in a YUV image; a strong variation in the Y channel indicates a transition from any dark color to white. The image is traversed using a grid of lines perpendicular to the image horizon. The horizon of the image is obtained by compensating the camera rotation and inclination using the measurements of the robot's head encoders.
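A minimal sketch of this line-point test along a single scan ray follows, assuming for simplicity that the horizon is aligned with the image rows, so a vertical ray reduces to one image column; the threshold value and helper names are illustrative.

```cpp
// Illustrative line-point detection along one vertical scan ray,
// given a grayscale Y-channel image and a fixed luminance threshold.
#include <cstdint>
#include <vector>

struct Point { int x, y; };

// Walk down one image column and report pixels where the Y channel
// jumps sharply (e.g., dark green field -> white line transition).
std::vector<Point> scanRay(const uint8_t* yChannel, int width, int height,
                           int column, int threshold = 40) {
    std::vector<Point> linePoints;
    for (int row = 1; row < height; ++row) {
        int prev = yChannel[(row - 1) * width + column];
        int curr = yChannel[row * width + column];
        if (curr - prev > threshold)      // strong dark-to-bright edge
            linePoints.push_back({column, row});
    }
    return linePoints;
}
```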