There has been a surge of works bridging MCMC sampling and optimization, with a specific focus on translating non-asymptotic convergence guarantees for optimization problems into the analysis of Langevin algorithms in MCMC sampling. A conspicuous distinction between the convergence analysis of Langevin sampling and that of optimization is that all known convergence rates for Langevin algorithms depend on the dimensionality of the problem, whereas the convergence rates for optimization are dimension-free for convex problems. Whether a dimension-independent convergence rate can be achieved by the Langevin algorithm is thus a long-standing open problem. This paper provides an affirmative answer to this problem for large classes of either Lipschitz or smooth convex problems with normal priors. By viewing the Langevin algorithm as composite optimization, we develop a new analysis technique that leads to dimension-independent convergence rates for such problems.

The first case corresponds to classification-type problems (2004), as well as other Bayesian classification problems (Sollich, 2002), with Gaussian or Bayesian elastic net priors. The second case corresponds to regression-type problems, where the entire posterior is strongly log-concave and log-Lipschitz smooth. In this case, one can separate the negative log-posterior $U(w)$ into two parts: the quadratic $g(w) = \frac{m}{2}\|w\|^2$ and the remainder $f(w) = U(w) - \frac{m}{2}\|w\|^2$, which is convex and $\beta^{-1}L$-Lipschitz smooth. We therefore directly let $g(w) = \frac{m}{2}\|w\|^2$ in Section 6.
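To make the composite view concrete, the following is a minimal sketch of an unadjusted Langevin update for a potential split as $U(w) = f(w) + g(w)$, with $g(w) = \frac{m}{2}\|w\|^2$ coming from the normal prior and a convex, Lipschitz-smooth $f$. The function names, the logistic-loss choice of $f$, and the step-size and temperature values are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (assumed, not the paper's code) of one unadjusted Langevin
# step for the composite potential U(w) = f(w) + g(w), g(w) = (m/2)||w||^2.
import numpy as np

def langevin_step(w, grad_f, m, eta, beta, rng):
    """One Langevin update targeting exp(-beta * U(w)) approximately."""
    grad_U = grad_f(w) + m * w               # composite gradient: convex f plus quadratic g
    noise = rng.standard_normal(w.shape)     # isotropic Gaussian noise
    return w - eta * grad_U + np.sqrt(2.0 * eta / beta) * noise

# Illustrative f: Bayesian logistic-regression negative log-likelihood,
# which is convex and Lipschitz smooth (the classification-type case).
def make_grad_f(X, y):
    def grad_f(w):
        z = X @ w
        return X.T @ (1.0 / (1.0 + np.exp(-z)) - y)  # X^T (sigmoid(Xw) - y)
    return grad_f

# Usage: draw approximate posterior samples on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = (rng.random(100) < 0.5).astype(float)
grad_f = make_grad_f(X, y)
w = np.zeros(5)
for _ in range(1000):
    w = langevin_step(w, grad_f, m=1.0, eta=1e-2, beta=1.0, rng=rng)
```

The split matters only for the analysis here: the iterate update itself uses the full gradient $\nabla f(w) + m w$, while the quadratic $g$ is the part whose contribution can be controlled without dimension-dependent constants.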