In real-time rendering, images must be computed fast enough to resemble real scenes, so speed and interactivity play a central role. Computing indirect lighting in real time, however, remains a major challenge because it is highly computationally intensive. To be usable in video games, the renderer must run very fast and the engine must maintain a constant frame rate; the images must be noise-free, and animations must remain stable without temporal artifacts. Global illumination in real-time applications is therefore still an open problem: the renderer must compute the entire light exchange between the surfaces of a virtual 3D scene. Offline techniques are capable of rendering photo-realistic images, but none of these algorithms can generate images in real time at a high frame rate. In this work, a Deep Spiking Neural Network with Heterogeneous Regularization learning technique is proposed, with the aim of building a more biologically plausible network that evaluates the amount of noise and provides a stopping criterion for fast realistic illumination. The idea is to improve learning performance by extracting the most relevant features with a deep spiking neural network, so as to achieve high rendering quality. We use a new objective function composed of a supervised term and an unsupervised term: the supervised term enforces the fit between the predicted labels and the known labels, while the unsupervised term imposes smoothness on the predicted labels. The experiments were performed on globally illuminated scenes containing different image distortions. The proposed model is also compared with the human visual system and with other state-of-the-art models. The results show better performance, as well as advantages in efficiency, greater biological plausibility, and ease of implementation on neuromorphic hardware.
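To make the shape of such an objective concrete, the following is a minimal sketch of a semi-supervised loss with a supervised fitting term and an unsupervised smoothness term. This is not the paper's actual formulation: the mean-squared fitting term, the graph-adjacency smoothness penalty, and the weighting parameter `lam` are all illustrative assumptions.

```python
import numpy as np

def combined_objective(pred, labels, labeled_mask, adjacency, lam=0.1):
    """Toy semi-supervised objective: fitting term + smoothness term.

    pred         -- predicted labels for all samples (1-D array)
    labels       -- known labels (only entries where labeled_mask is True are used)
    labeled_mask -- boolean mask marking the supervised samples
    adjacency    -- symmetric similarity weights between samples
    lam          -- assumed trade-off weight between the two terms
    """
    # Supervised term: mean squared error on the labeled samples only.
    fit = np.mean((pred[labeled_mask] - labels[labeled_mask]) ** 2)
    # Unsupervised term: penalize differences between the predictions of
    # similar (adjacent) samples -- a graph-smoothness penalty.
    diffs = pred[:, None] - pred[None, :]
    smooth = np.sum(adjacency * diffs ** 2) / adjacency.sum()
    return fit + lam * smooth
```

With perfectly fitted, constant predictions both terms vanish and the loss is zero; disagreement with the known labels or between neighboring predictions increases it.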