Real-world robot tasks are so complex that it is hard to hand-tune all of the domain knowledge, especially to model the dynamics of the environment. Several research efforts focus on applying machine learning to map learning, sensor/action mapping, and vision. The work presented in this paper explores machine learning techniques for robot planning. The goal is to use real robotic navigational execution as a data source for learning. Our system collects execution traces and extracts relevant information to improve the efficiency of generated plans. In this article, we present the representation of the path planner and the navigation modules, and describe the execution trace. We show how training data is extracted from the execution trace. We introduce the concept of situation-dependent costs, whereby situational features can be attached to the costs used by the path planner. In this way, the planner can generate paths that are appropriate for a given situation. We present experimental results from a simulated, controlled environment as well as from data collected from the actual robot.
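The idea of situation-dependent costs can be sketched as follows. This is a minimal, hypothetical illustration (the function and field names are our own, not the paper's), assuming that each arc in the planner's map carries a default traversal cost and that learned rules, keyed on situational features such as time of day, scale that cost:

```python
def arc_cost(arc, situation, learned_rules):
    """Return an arc's traversal cost adjusted for the current situation."""
    cost = arc["default_cost"]
    for rule in learned_rules:
        # A rule fires when the current situation matches its feature value,
        # e.g. {"feature": "time_of_day", "value": "evening", "factor": 3.0}.
        if situation.get(rule["feature"]) == rule["value"]:
            cost *= rule["factor"]
    return cost

# Example: a corridor learned (from execution traces) to be slow to
# traverse in the evening, e.g. because it is crowded.
corridor = {"default_cost": 10.0}
rules = [{"feature": "time_of_day", "value": "evening", "factor": 3.0}]

print(arc_cost(corridor, {"time_of_day": "evening"}, rules))  # 30.0
print(arc_cost(corridor, {"time_of_day": "morning"}, rules))  # 10.0
```

With costs conditioned on the situation in this way, an ordinary shortest-path search over the same map can yield different, situation-appropriate routes.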