Real-world radiance values span several orders of magnitude, which have to be processed by artificial systems in order to capture visual scenes with high visual sensitivity. Interestingly, it has been found that similar processing happens in biological systems, starting at the retina level. Our motivation in this paper is therefore to develop a new video tone mapping operator (TMO) based on a synergistic model of the retina. We start from the so-called Virtual Retina model, which has been developed in computational neuroscience. We show how to enrich this model with new features to use it as a TMO, such as color management, luminance adaptation at the photoreceptor level and readout from a heterogeneous population activity. Our method works for video but can also be applied to static images (by repeating images in time). It has been carefully evaluated on standard benchmarks in the static case, giving results comparable to the state-of-the-art using default parameters, while offering user control for finer tuning. Results on HDR videos are also promising, specifically w.r.t. temporal luminance coherency. As a whole, this paper shows a promising way to address computational photography challenges by exploiting current research in neuroscience about retina processing.

Real-world radiance values span several orders of magnitude, which have to be processed by artificial systems in order to capture visual scenes with high visual sensitivity. Think about scenes of twilight, day sunlight, the stars at night or the interiors of a house. To capture these scenes, one needs either cameras capable of capturing so-called high dynamic range (HDR) images, which are expensive, or the method proposed in [21], currently implemented in most standard cameras. The problem is how to visualize these images afterwards, since standard monitors have a low dynamic range (LDR). Two kinds of solutions exist.
The first is technical: there are HDR displays, but they are not affordable for the general public yet. The second is algorithmic: there are methods to compress the range of intensities from HDR to LDR. These methods are called tone mapping operators (TMOs) [84]. TMOs have been developed for static scenes and videos rather independently. There has been intensive work on static images (see [49, 11] for reviews), with approaches combining luminance adaptation and local contrast enhancement, sometimes closely inspired by retinal principles, as in [64, 8, 30, 67] to cite just a few. Recent developments concern video tone mapping, where a few approaches have been developed so far (see [28, 27] for surveys). Interestingly, in neuroscience it has been found that a similar tone mapping processing occurs in the retina through adaptation mechanisms. This is crucial since the retina must maintain high contrast sensitivity over this very broad range of luminance in natural scenes [86, 34, 44]. Adaptation is both global through neuromodulatory feedback loops and local through adaptive gain control mechanisms so ...
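To make the HDR-to-LDR compression performed by a TMO concrete, the following is a minimal sketch of a simple global operator (not the retina-based method of this paper): luminance is rescaled by its log-average and then compressed with a rational curve. The function name and parameter values are our own illustrative choices.

```python
import numpy as np

def global_tmo(radiance, key=0.18, eps=1e-6):
    """Minimal global tone mapping sketch.

    Scales HDR luminance by its log-average, then compresses the
    result into [0, 1) with a simple rational curve, so that several
    orders of magnitude fit a low dynamic range display.
    """
    # Log-average luminance of the HDR input (geometric mean).
    log_avg = np.exp(np.mean(np.log(radiance + eps)))
    scaled = key * radiance / log_avg
    # Rational compression: maps [0, inf) into [0, 1) monotonically.
    return scaled / (1.0 + scaled)

# Example: radiances spanning six orders of magnitude
hdr = np.array([1e-2, 1e0, 1e2, 1e4])
ldr = global_tmo(hdr)
```

Such a global curve treats every pixel identically; the retina-inspired operators discussed next add local adaptation on top of this kind of global compression.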