Autonomy and adaptability are key features in the design and construction of a robotic system capable of carrying out tasks in an unstructured, not predefined environment. Such features are generally observed in animals, biological systems that often serve as inspiration for the design of robotic systems. The autonomy and adaptability of these biological systems arise partly from their ability to learn. Animals learn to move and control their own bodies when young; from their progenitors they learn to survive, to hunt, and to avoid undesirable situations. There has been increasing interest in endowing robotic systems with these abilities. This dissertation proposes a learning module for a quadruped robot controller that enables the robot both to detect and to avoid an obstacle in its path. Detection is based on a Forward Internal Model (FIM), trained online to create expectations about the robot's perceptive information. This information is acquired by a set of range sensors that scan the ground in front of the robot in order to detect the obstacle. To avoid stepping on the obstacle, the detections are used to build a map of responses that changes the locomotion as necessary. The map is built and tuned every time the robot fails to step over the obstacle, and it defines how the robot should act to avoid such situations in the future. Both learning tasks are carried out online and remain active after the robot has learned, enabling the robot to adapt to possible new situations. The proposed architecture was inspired by [14, 17], but is applied here to a quadruped robot with different sensors and a specific sensor configuration. Moreover, the mechanism is coupled with the robot's locomotion generator, based on Central Pattern Generators (CPGs), presented in [22].
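The detection idea described above, a forward model trained online whose prediction error signals an unexpected obstacle, can be illustrated with a minimal sketch. The class, its parameters, and the error threshold below are illustrative assumptions, not the dissertation's actual model:

```python
import numpy as np

class OnlineForwardModel:
    """Toy forward internal model (FIM): predicts the next range-sensor
    reading from the current one via an online LMS-trained linear map.
    All names and values here are illustrative, not from the thesis."""

    def __init__(self, n_sensors, lr=0.1, threshold=0.5):
        self.W = np.eye(n_sensors)   # prediction weights, updated online
        self.lr = lr                 # learning rate of the LMS update
        self.threshold = threshold   # error magnitude that flags an obstacle

    def step(self, current, nxt):
        pred = self.W @ current                      # expected next reading
        err = nxt - pred                             # surprise signal
        self.W += self.lr * np.outer(err, current)   # online LMS update
        return bool(np.linalg.norm(err) > self.threshold)

# On flat ground the readings are steady, so the model's expectations match.
fim = OnlineForwardModel(n_sensors=3)
flat = np.array([1.0, 1.0, 1.0])
for _ in range(50):
    fim.step(flat, flat)

on_flat = fim.step(flat, flat)                            # no surprise
on_obstacle = fim.step(flat, np.array([1.0, 0.2, 1.0]))   # sudden near echo
print(on_flat, on_obstacle)
```

Because learning stays active, the same update rule that builds the expectations also lets them drift to track slowly changing terrain, which mirrors the adaptability argument made above.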
In order to achieve its goal, the controller sends commands to the CPG so that the necessary changes in the locomotion are applied. Results show that both learning tasks succeeded: the robot was able to detect the obstacle and to change its locomotion using the information acquired at collision time.
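How such a command might modulate the rhythmic output can be sketched with a minimal phase-oscillator CPG; the `lift` parameter below stands in for a locomotion command (here, stepping higher over a detected obstacle) and all values are illustrative assumptions:

```python
import math

def cpg_step(phase, dt=0.01, omega=2 * math.pi, lift=0.05):
    """One integration step of a toy phase-oscillator CPG.
    `lift` plays the role of a controller command: raising it
    makes the swing leg step higher. Values are illustrative."""
    phase = (phase + omega * dt) % (2 * math.pi)
    height = lift * max(0.0, math.sin(phase))  # foot lifts during swing only
    return phase, height

# The learned "response": on obstacle detection, command a higher step.
normal_lift, avoid_lift = 0.05, 0.12
obstacle_detected = True
lift = avoid_lift if obstacle_detected else normal_lift

phase, heights = 0.0, []
for _ in range(100):              # one oscillator cycle of walking
    phase, h = cpg_step(phase, lift=lift)
    heights.append(h)
print(round(max(heights), 3))
```

The point of the sketch is the coupling pattern: the rhythm generation is untouched, and the learned response map only retunes a parameter of the CPG, as the architecture above prescribes.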