In this paper we investigate submodular value functions using complex dynamic programming. In complex dynamic programming (DP) we consider concatenations and linear combinations of standard DP operators, as well as combinations of maximizations and minimizations. Such value functions have many applications and interpretations, both in stochastic control (and stochastic zero-sum games) and in the analysis of (uncontrolled) discrete-event dynamic systems. Submodularity implies the monotonicity of the selectors appearing in the DP equations, which translates, in the context of stochastic control and stochastic games, into monotone optimal policies. Our work is based on the score-space approach of Glasserman and Yao.
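For orientation, the link between submodularity and monotone selectors can be sketched in the standard lattice-programming form; the notation below is generic and not taken from the paper itself:

```latex
% Submodularity of a function f on a product lattice X \times A:
% for all (x,a), (x',a') \in X \times A,
f\bigl((x,a) \wedge (x',a')\bigr) + f\bigl((x,a) \vee (x',a')\bigr)
  \;\le\; f(x,a) + f(x',a').

% Standard consequence (Topkis-type monotonicity): if f(x,\cdot)
% attains its minimum for each x, the least minimizer
a^{*}(x) \;=\; \min \operatorname*{arg\,min}_{a \in A} f(x,a)
% is nondecreasing in x, i.e. x \le x' \implies a^{*}(x) \le a^{*}(x').
```

In the stochastic-control reading, $a^{*}(x)$ plays the role of the selector in the DP equation, and its monotonicity in the state $x$ is what yields monotone optimal policies.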