One of the fundamental tasks in an autonomous vehicle navigation system is to create a model of the scene around the vehicle. This model, possibly in combination with a predefined map of the environment, provides the basis for obstacle avoidance and route planning. As in most areas of AI, the choice of representation for the model is a key issue [1]. In vehicle navigation, a 3D representation of the scene is often used, i.e. points in the scene are represented by their 3D positions (see, for instance, [2,3,4]). In contrast to this approach, we intend to investigate the possibility of carrying out navigation without explicit 3D information, using quantities which are more directly related to image-plane measurements yet which still encode the information required for the navigation task. This idea is undeveloped at the moment. However, it has provided the motivation for the work described here. The purpose of the work is to carry out temporal and stereo correspondence matching in an integrated way. It is certainly true that this integration can be carried out effectively in 3D-based systems [5], but the method in this paper performs the integration without reference to explicit 3D information. The overall approach was suggested by [6], and the main features of the integration are listed below. The current system is limited to straight translational motion in a static scene.

• Temporal correspondence matching is used to support the stereo correspondence matching. This approach was adopted for two reasons. Firstly, the combination of the rate of image capture and typical vehicle speed is such that corners are displaced only by small amounts between consecutive images in a sequence, so only a small search area is required for temporal matching; in contrast, the search area during stereo matching extends along an epipolar line, so matching ambiguities are more likely to arise. Secondly, the corner detector used [7] is sensitive to changes in viewpoint. This affects the 'corner strength' attribute which is generated to characterise a corner, and hence affects the matching processes. Temporal matching is less affected than stereo matching because the change in viewpoint between two consecutive frames of a sequence is much smaller than the change in viewpoint between the left and right cameras.

• Two attributes are used for corner comparison during stereo matching: corner strength and a motion-based attribute.

• Stereo matches found in one stereo pair are 'cascaded' forwards to initiate matching in the next stereo pair in the sequence.

• Finally, the main subject of the paper is a method by which the optical flow at a corner is used to predict its stereo disparity. Given this prediction, the search area for stereo matching is confined to a region around a point instead of extending along an epipolar line.

Section 1 describes system initialisation, section 2 the basic concepts underlying the main processing, section 3 the main processing itself, and section 4 the results.
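The flow-to-disparity prediction in the final point can be sketched as follows. This is a minimal illustration only, not the paper's exact formulation: it assumes pure forward translation with known inter-frame travel, a focus of expansion (FOE) at a known image point, and the small-motion approximation that the radial flow of a corner at distance r from the FOE is approximately r·Δz/Z for scene depth Z. The function name and parameters are hypothetical.

```python
import math

def predict_disparity(x, y, flow_x, flow_y, f, baseline, dz, foe=(0.0, 0.0)):
    """Predict stereo disparity from the optical flow of a corner.

    Assumes pure forward translation (a sketch, not the paper's method).
    x, y           : corner position in the image (pixels)
    flow_x, flow_y : optical flow between consecutive frames (pixels)
    f              : focal length (pixels)
    baseline       : stereo baseline (same units as dz)
    dz             : forward distance travelled between the two frames
    foe            : focus of expansion (principal point for forward motion)
    """
    rx, ry = x - foe[0], y - foe[1]
    r = math.hypot(rx, ry)              # radial distance from the FOE
    flow = math.hypot(flow_x, flow_y)   # radial flow magnitude
    if r == 0.0 or flow == 0.0:
        return None                     # depth is unobservable at the FOE
    # Small-motion approximation: flow ~ r * dz / Z, so Z ~ r * dz / flow.
    depth = r * dz / flow
    # Stereo disparity for a fronto-parallel rig: d = f * B / Z.
    return f * baseline / depth
```

For example, a corner 100 pixels from the FOE with 2 pixels of flow, given dz = 0.5 m, f = 700 pixels and a 0.3 m baseline, yields a predicted depth of 25 m and hence a predicted disparity of 8.4 pixels; the stereo search can then be confined to a small window around that offset rather than the whole epipolar line.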