Mobile robots operating in natural terrain need some form of long-range perception in order to navigate efficiently. Although LADAR is a commonly used sensor on such systems, providing range data out to 25 m and beyond, we have instead focused on what information can be extracted from vision. Our robot has only two stereo camera pairs for terrain sensing; they provide reliable stereo data up to 5 m away, which is not enough to prevent myopic behavior. To overcome this limitation, we have developed a novel approach to navigation that uses monocular imagery to plan a path in image space. We take a monocular image and apply a learned color-to-cost mapping to transform the raw image into a cost image. Then, after a pseudo-configuration-space transform, we search for a pixel-to-pixel path from a point in front of the robot to the projected goal point in the cost image. Our implementation has been shown to react to obstacles at a range of 93 m, far beyond the reliable range of our stereo data. Supporting videos (see Section 4) are available in the online issue at www.interscience.wiley.com.
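The pipeline described above can be illustrated with a minimal sketch. The actual color-to-cost mapping is learned, so the linear mapping below is purely hypothetical; the pseudo-configuration-space transform is approximated here by a max-filter dilation of the cost image by the robot footprint, and the pixel-to-pixel search uses Dijkstra's algorithm over 4-connected pixels. All function names, weights, and parameters are illustrative, not the paper's implementation.

```python
import heapq
import numpy as np

def color_to_cost(image, weights=(1.0, -1.0, 0.0), bias=1.5):
    # Hypothetical linear color-to-cost mapping: red raises cost, green
    # lowers it. The real mapping is learned from data, not hand-set.
    norm = image.astype(float) / 255.0
    cost = bias + norm @ np.asarray(weights)
    return np.clip(cost, 0.1, None)  # keep edge costs strictly positive

def cspace_dilate(cost, radius=1):
    # Pseudo-configuration-space transform: each pixel takes the maximum
    # cost in a (2*radius+1)^2 window, inflating obstacles so the robot
    # can be treated as a point in the cost image.
    h, w = cost.shape
    padded = np.pad(cost, radius, mode="edge")
    out = np.zeros_like(cost)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            np.maximum(out, padded[dy:dy + h, dx:dx + w], out)
    return out

def pixel_path(cost, start, goal):
    # Dijkstra search for a minimum-cost pixel-to-pixel path from a
    # point in front of the robot (start) to the projected goal pixel.
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            break
        if d > dist[y, x]:
            continue
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and d + cost[ny, nx] < dist[ny, nx]:
                dist[ny, nx] = d + cost[ny, nx]
                prev[(ny, nx)] = (y, x)
                heapq.heappush(pq, (dist[ny, nx], (ny, nx)))
    path, node = [], goal  # walk predecessors back to the start pixel
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

A usage example: build a small synthetic image that is mostly green (low cost) with a red block (high cost), dilate it, and plan from one corner to the other; the returned path is a list of adjacent pixel coordinates connecting start to goal.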