It has been argued for many years that human subjects recover three-dimensional shape cues from the distribution of shading on an object. Many machine vision researchers have interpreted this skill as an indication that the distribution of radiance in regions of an object that do not lie in shadow can be integrated to yield a dense depth map. Unfortunately, this belief depends critically on an extremely simple photometric model, known as the image irradiance equation (see [11] for a clear exposition of this approach). This model is flawed because it assumes that radiance is a function of a purely local geometric property, the surface normal. It ignores the fact that patches of surface reflect light not only to an imaging sensor but also to other patches of surface (an effect known as "mutual illumination"), making the distribution of radiance a complicated function of the global scene geometry.

The effects of this redistribution are easy to observe in simple experiments [18,4,16,12,13]; we show some examples in figures 1-3. At first glance, the effects of mutual illumination appear rich in desirable cues to shape and to absolute surface lightness. However, the mathematical complexity of these effects makes it very difficult to see how the cues might be exploited. On the other hand, there is reason to believe that humans can exploit them to some extent [6,7].

The purpose of this article is to explore the implications of mutual illumination and to consider what shading cues can in fact be used to recover shape information. We show that although radiance itself is not a reliable shape cue, discontinuities in radiance are, because they can appear only at distinguished points on surfaces. Furthermore, the events that cause discontinuities in radiance are geometrically simple.
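
The contrast between the two models can be sketched mathematically (the notation here is ours, not drawn from [11]): the image irradiance equation ties radiance at a point to the local surface normal alone, whereas once mutual illumination is accounted for, the radiosity at a point depends on the radiosity of every other patch visible from it.

```latex
% Local model: image irradiance depends only on the surface normal n(x).
% For example, for a Lambertian surface with albedo \rho and a distant
% source in direction s:
E(x) = R\bigl(\mathbf{n}(x)\bigr), \qquad
R(\mathbf{n}) = \rho\,(\mathbf{n}\cdot\mathbf{s})

% With mutual illumination, the radiosity B satisfies an integral
% equation over the whole visible surface S, and is therefore a
% global quantity:
B(x) = E(x) + \rho(x)\int_{S} K(x, x')\, B(x')\, \mathrm{d}A',
\qquad
K(x, x') = \frac{\cos\theta\,\cos\theta'}{\pi\,\lVert x - x'\rVert^{2}}\, V(x, x')
```

Here $V(x, x')$ is a binary visibility term and $\theta$, $\theta'$ are the angles between the line joining $x$ and $x'$ and the respective surface normals; the kernel $K$ is what couples the radiance at one patch to the geometry of the entire scene.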