Abstract—In this paper, we propose an evolutionary cognitive architecture that enables a mobile robot to perform visual navigation. First, a graph-based world representation is used to build a map prior to navigation, through an appearance-based scheme that uses only features derived from color information. Next, a genetic algorithm evolves a navigation controller that the robot uses for visual servoing, driving it through a set of nodes on the topological map. Experiments in simulation show that an evolved robot, adapted to both exteroceptive and proprioceptive data, can successfully drive through a list of sub-goals, mitigating the local minima in which the evolutionary process can sometimes become trapped. We also show that this approach allows a simple fitness formula that is nevertheless descriptive enough to target specific goals.