How does the brain combine information across the eyes? In primary visual cortex, responses to spatial patterns exhibit "ocularity invariance": monocular and binocular stimulation produce approximately equal responses, because mutual suppression between the eyes offsets binocular summation. Here we asked whether this invariance holds for pure changes in luminance. We collected steady-state EEG and pupillometry data simultaneously, and find strong deviations from ocularity invariance both in cortex and in the subcortical pathway that controls pupil diameter. In cortex, we find strong binocular facilitation and negligible interocular suppression, whereas measurements of pupil diameter showed weaker facilitation and stronger suppression. This difference was not purely a consequence of the temporal stimulus parameters: pronounced binocular facilitation was also observed at faster flicker rates. Near-linear binocular combination was also found for the same stimuli using a perceptual matching task. A hierarchical Bayesian implementation of a standard binocular combination model confirms that interocular suppression was substantially weaker for our EEG and matching data than for the pupillometry results. These findings illustrate that ocularity invariance is not a ubiquitous feature of visual processing, and that the brain can repurpose a generic algorithm for different functions by adjusting parameters such as the weight of suppression between pathways.
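The key parameter in the binocular combination framework is the interocular suppression weight. As a rough illustration (not the exact model fitted in this study), the sketch below uses one common gain-control form in which each eye's signal is divided by a pool that includes the other eye's signal; the exponents and constants here are hypothetical placeholder values:

```python
def binocular_response(L, R, p=1.3, q=1.0, Z=0.1, w=1.0):
    """Illustrative two-stage gain-control combination of left (L) and
    right (R) eye signals. The weight w scales interocular suppression:
    w near 1 yields ocularity invariance (binocular ~= monocular),
    while w near 0 yields near-linear summation (binocular ~= 2x monocular).
    Parameter values are placeholders, not fitted estimates."""
    resp_L = L**p / (Z + L**q + w * R**q)  # left channel, suppressed by right
    resp_R = R**p / (Z + R**q + w * L**q)  # right channel, suppressed by left
    return resp_L + resp_R                 # binocular summation stage

# Strong suppression (w = 1): binocular response barely exceeds monocular.
print(binocular_response(1.0, 1.0, w=1.0) / binocular_response(1.0, 0.0, w=1.0))

# No suppression (w = 0): binocular response is double the monocular one.
print(binocular_response(1.0, 1.0, w=0.0) / binocular_response(1.0, 0.0, w=0.0))
```

In this toy form, the pattern reported above corresponds to a large w for the pupil pathway and a small w for the cortical (EEG) and perceptual-matching data.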