Stochastic gradient descent (with a mini-batch) is one of the most common iterative algorithms used in machine learning. It is computationally cheap to implement, and recent literature suggests that it may also have implicit regularization properties that help prevent overfitting. This paper analyzes the properties of stochastic gradient descent from a theoretical standpoint to help bridge the gap between theoretical and empirical results; in particular, we prove bounds that depend explicitly on the number of epochs. Assuming smoothness, the Polyak-Łojasiewicz inequality, and the bounded variation property, we prove high probability bounds on the convergence rate. Assuming Lipschitz continuity and smoothness, we prove high probability bounds on the uniform stability. Putting these together (noting that some of the assumptions imply one another), we bound the true risk of the iterates of stochastic gradient descent. For convergence, our high probability bounds match existing expected bounds. For stability, our high probability bounds extend the nonconvex expected bound of Hardt et al. (2015). We use existing results to bound the generalization error in terms of the stability. Finally, we combine the convergence and generalization bounds. We find that, for a certain number of epochs of stochastic gradient descent, the convergence and generalization terms balance, yielding a true risk bound that goes to zero as the number of samples goes to infinity.
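For concreteness, the following is a minimal sketch of the objects referred to above, in illustrative notation that may differ from the symbols used in the body of the paper: the mini-batch stochastic gradient descent update on an empirical risk $F(w) = \frac{1}{n}\sum_{i=1}^{n} f_i(w)$ with step size $\eta_t$ and mini-batch $B_t$, the smoothness assumption with constant $L$, and the Polyak-Łojasiewicz inequality with constant $\mu > 0$ and minimum value $F^*$:
\[
w_{t+1} = w_t - \eta_t \, \frac{1}{|B_t|} \sum_{i \in B_t} \nabla f_i(w_t),
\qquad
\|\nabla F(w) - \nabla F(v)\| \le L \, \|w - v\|,
\qquad
\tfrac{1}{2}\,\|\nabla F(w)\|^2 \ge \mu \, \bigl(F(w) - F^*\bigr).
\]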