<p>Visual navigation of mobile robots has become a core capability that enables applications ranging from planetary exploration to self-driving cars. While systems built on passive cameras have been shown to be robust in well-lit scenes, they cannot handle the range of conditions associated with a full diurnal cycle. Lidar, which is largely invariant to ambient lighting conditions, offers one possible remedy to this problem. In this paper, we describe a visual navigation pipeline that exploits lidar’s ability to measure both range and intensity (a.k.a. reflectance) information. In particular, we use <em>lidar intensity images</em> (from a scanning-laser rangefinder) to carry out tasks such as <em>visual odometry</em> (VO) and <em>visual teach and repeat</em> (VT&R) in real time, from full-light to full-dark conditions. This lighting invariance comes at the price of coping with motion distortion, owing to the scanning-while-moving nature of laser-based imagers. We present our results and lessons learned from the last few years of research in this area.</p>
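<p>To make the notion of a lidar intensity image concrete, the sketch below shows one plausible way such an image could be formed: each laser return carries an azimuth, an elevation, and an intensity value, and binning the angles into pixel coordinates yields a camera-like 2-D image of the scene's reflectance. This is a minimal illustration, not the paper's actual pipeline; the function name, image dimensions, and field-of-view limits are all assumptions chosen for the example.</p>
<pre><code>import numpy as np

def intensity_image(azimuth, elevation, intensity,
                    width=480, height=360,
                    az_range=(-np.pi / 4, np.pi / 4),
                    el_range=(-np.pi / 6, np.pi / 6)):
    """Rasterize per-return intensity values into a 2-D image.

    azimuth, elevation : 1-D arrays of return angles [rad]
    intensity          : 1-D array of reflectance values
    (Hypothetical sketch; the actual sensor geometry and
    image resolution in the paper may differ.)
    """
    img = np.zeros((height, width), dtype=np.float32)

    # Map angles linearly onto pixel indices
    # (row 0 corresponds to the top of the elevation range).
    u = ((azimuth - az_range[0]) /
         (az_range[1] - az_range[0]) * (width - 1)).astype(int)
    v = ((el_range[1] - elevation) /
         (el_range[1] - el_range[0]) * (height - 1)).astype(int)

    # Keep only returns that fall inside the image bounds.
    ok = (u >= 0) &amp; (u &lt; width) &amp; (v >= 0) &amp; (v &lt; height)
    img[v[ok], u[ok]] = intensity[ok]
    return img
</code></pre>
<p>Because the resulting image looks much like a grayscale camera frame, standard sparse-feature front ends can in principle operate on it directly; the key caveat, as noted above, is that the rows and columns of a scanning lidar's image are captured at different times, so any motion during the scan must be compensated for.</p>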