A convex optimization-based method is proposed for numerically solving dynamic programs in continuous state and action spaces. The approach, which relies on a discretization of the state space, has the following salient features. First, by introducing an auxiliary optimization variable that assigns a contribution to each grid point, it requires no interpolation either in solving the associated Bellman equation or in constructing a control policy. Second, when the optimal value function is convex, the proposed method approximates it via convex programming, and the approximation converges uniformly as the grid is refined. In this case, we also propose a method for designing a control policy whose performance converges uniformly to the optimum as the grid resolution becomes finer. Third, for nonlinear control-affine systems, the convex optimization approach provides an approximate control policy with a provable suboptimality bound. Fourth, in the general case, the proposed convex formulation of the dynamic programming operator can be modified into a nonconvex bi-level program, in which the inner problem is a linear program, without losing the uniform convergence properties, provided that a globally optimal solution to the bi-level program can be found. Our convex methods and analyses indicate that convexity in dynamic programming deserves attention, as it can play a critical role in obtaining tractable and convergent numerical solutions.
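To make the construction concrete, the following is a minimal sketch, not the paper's implementation, of the interpolation-free Bellman backup for a toy one-dimensional control-affine system, written with cvxpy. At each grid point, the auxiliary simplex-weight variable expresses the next state as a convex combination of grid points, so the continuation value is a weighted sum of grid values rather than an interpolated quantity. The dynamics, stage cost, bounds, and all parameter values below are illustrative assumptions.

```python
# Sketch of the interpolation-free convex Bellman backup on a state grid.
# System, cost, and parameters are illustrative, not taken from the paper.
import cvxpy as cp
import numpy as np

grid = np.linspace(-1.0, 1.0, 21)   # state-space grid {x_1, ..., x_N}
N = grid.size
gamma = 0.95                         # discount factor
a, b = 0.9, 0.5                      # control-affine dynamics: x+ = a*x + b*u
v = np.zeros(N)                      # current value estimate at grid points

def bellman_backup(v):
    """One sweep of the convex Bellman operator over all grid points."""
    v_new = np.empty(N)
    for i, x in enumerate(grid):
        u = cp.Variable()                 # action
        w = cp.Variable(N, nonneg=True)   # auxiliary weights over grid points
        next_state = a * x + b * u        # affine in u, so constraints stay convex
        constraints = [
            cp.sum(w) == 1,               # w lies on the probability simplex
            w @ grid == next_state,       # next state = convex combination of grid
            cp.abs(u) <= 1,               # keeps the next state inside the grid hull
        ]
        # Stage cost plus discounted weighted value; no interpolation is needed
        # because w directly assigns each grid point's contribution.
        objective = cp.Minimize(x**2 + u**2 + gamma * (w @ v))
        prob = cp.Problem(objective, constraints)
        v_new[i] = prob.solve()
    return v_new

for _ in range(50):                  # value iteration; the discounted backup is a contraction
    v = bellman_backup(v)
```

Because the dynamics are affine in the action and the stage cost is convex, each per-grid-point problem is a single convex program in the joint variables (u, w), which is the structural point the abstract highlights; in the general nonconvex case, the same constraint set instead defines the inner linear program of the bi-level formulation.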