We develop subgradient- and gradient-based methods for minimizing strongly convex functions under a notion which generalizes the standard Euclidean strong convexity. We propose a unifying framework for subgradient methods which yields two kinds of methods, namely, the Proximal Gradient Method (PGM) and the Conditional Gradient Method (CGM), unifying several existing methods. The unifying framework provides tools to analyze the convergence of PGMs and CGMs for non-smooth and (weakly) smooth problems, and further for structured problems such as inexact oracle models. The proposed subgradient methods yield optimal PGMs for several classes of problems, and yield optimal and nearly optimal CGMs for smooth and weakly smooth problems, respectively.

• Non-smooth problems. The problems of minimizing convex functions which are Lipschitz continuous but not necessarily differentiable.
• Smooth problems. The problems of minimizing continuously differentiable convex functions with Lipschitz continuous gradients.

These two classes of convex problems can also be reformulated as structured convex problems, which have been receiving much attention in both theoretical and application aspects. In particular, studies of (sub)gradient-based methods for the class of "smoothable" functions [1,6,9,27,35,36], the class of composite problems [1,5,8,17,18,19,26,38,42,43], and the class of weakly smooth problems [11,12,39,40] are notably important.

In this paper, we particularly focus on the following two kinds of (sub)gradient methods: the Proximal (sub)Gradient Method (PGM) and the Conditional Gradient Method (CGM). Both methods require easy-to-solve subproblems at each iteration. The PGM is executed using a prox-function which defines a tractable proximal operator. Based on the conceptual complexity analysis of Nemirovski and Yudin [32], many important PGMs for the above classes of convex problems can be derived and their optimal convergence can be established.
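To make the proximal mechanism concrete, the following is a minimal sketch of a PGM iteration in the special case of the Euclidean prox-function, applied to a composite problem min f(x) + g(x). The names `proximal_gradient`, `prox_g`, and the LASSO-type illustration are ours for exposition and are not taken from the paper; the general framework allows non-Euclidean prox-functions as well.

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, step, n_iters=100):
    """Euclidean proximal gradient iteration for min f(x) + g(x):
    x_{k+1} = prox_{step*g}(x_k - step * grad_f(x_k))."""
    x = x0
    for _ in range(n_iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustration on a LASSO-type problem: min 0.5*||x - b||^2 + lam*||x||_1.
# The proximal operator of lam*||.||_1 is soft-thresholding, which is the
# "easy-to-solve subproblem" solved in closed form at each iteration.
lam = 1.0
b = np.array([2.0, -0.5, 1.0])
grad_f = lambda x: x - b                      # gradient of 0.5*||x - b||^2
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

x_star = proximal_gradient(grad_f, prox_g, np.zeros(3), step=1.0)
# x_star equals the soft-thresholding of b: [1.0, 0.0, 0.0]
```

With step size 1 this toy instance converges in a single iteration, since the gradient step lands exactly on b and the prox map is then applied once.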
As will be pointed out in this paper, many PGMs are modifications, accelerations, and/or combinations of two remarkably important PGMs, namely, the Mirror-Descent Method (MDM) [4,32] and the Dual-Averaging Method (DAM) [37], which are optimal for non-smooth problems.

The CGMs, on the other hand, are characterized by linear subproblems, i.e., problems of minimizing a linear functional over a bounded convex feasible set. Originating from Frank and Wolfe [15], the convergence properties of CGMs are well analyzed (see [10,13,16,27,40,41] and references therein). Because of their advantages, such as the simplicity of the subproblems and the sparsity of approximate solutions, CGMs are actively studied with applications to machine learning and statistics [9,21,23,24]. It is important to note that CGMs have worse convergence rates than PGMs, but the computational cost of each iteration of the former can be lower, which can compensate for the overall cost. It is therefore extremely important to choose between the PGM and the CGM depending on the structure of the problem to be solved.

In a recent work [22], a unifying framework of PGMs was proposed through a unifying treatment of the MDM and the DAM for non-smooth problems, and also for their correspondin...
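The linear subproblem that characterizes CGMs can be sketched as follows. This is a minimal Frank–Wolfe iteration with the classical step size 2/(k+2); the names `conditional_gradient` and `simplex_lmo` are ours for exposition, and the probability simplex is chosen only as an example of a bounded convex feasible set whose linear minimization oracle (LMO) is trivial, which is exactly the "easiness of subproblems" advantage noted above.

```python
import numpy as np

def conditional_gradient(grad_f, lmo, x0, n_iters=1000):
    """Frank-Wolfe iteration: solve the linear subproblem via the LMO,
    then move toward its solution with step size gamma_k = 2/(k+2)."""
    x = x0
    for k in range(n_iters):
        s = lmo(grad_f(x))          # s_k = argmin over the set of <grad f(x_k), s>
        gamma = 2.0 / (k + 2.0)
        x = x + gamma * (s - x)     # convex combination, so x stays feasible
    return x

# LMO over the probability simplex: a linear functional is minimized at a
# vertex, i.e., the coordinate vector at the smallest gradient entry.
def simplex_lmo(g):
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

# Illustration: Euclidean projection of p onto the simplex, min ||x - p||^2.
p = np.array([0.2, 0.3, 0.5])
x0 = simplex_lmo(np.ones(3))        # start at a vertex of the feasible set
x = conditional_gradient(lambda x: 2.0 * (x - p), simplex_lmo, x0)
```

Each iterate is a convex combination of simplex vertices, which illustrates the sparsity of CGM iterates: after k iterations the solution is supported on at most k + 1 vertices.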