This paper revisits the problem of synthesizing the optimal control law for linear systems with a quadratic cost. For this problem, the state feedback gain matrix of the optimal controller is traditionally computed by solving the Riccati equation, which is primarily obtained using Calculus of Variations (CoV) and Hamilton-Jacobi-Bellman (HJB) equation based approaches. To obtain the Riccati equation, these approaches require certain assumptions in the solution procedure: the former approach requires the notion of co-states, whose relationship with the states is then exploited to obtain a closed-form expression for the optimal control law, while the latter requires a priori knowledge of the optimal cost function. In this paper, we propose a novel method for computing linear quadratic optimal control laws using the global optimal control framework introduced by V.F. Krotov. As shall be illustrated in this article, this framework requires neither the notion of co-states nor any a priori information regarding the optimal cost function. Nevertheless, under this framework, the optimal control problem is translated into a non-convex optimization problem. The novelty of the proposed method lies in transforming this non-convex optimization problem into a convex one. Insights, along with future directions of the work, are presented at appropriate points in the article. Finally, numerical results are provided to demonstrate the proposed methodology.

GOCP: Compute an optimal control law u*(t) which minimizes (or maximizes) the performance index/cost functional:

Since the aforementioned problem corresponds to optimization of the cost functional subject to the dynamics of the system considered, and possibly constraints on the input(s) and/or state(s), Calculus of Variations (CoV) is generally employed to address optimal control design problems [1,2]. The assumption of an optimal control is usually the first step when using CoV techniques.
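To make the traditional route concrete, the following is a minimal sketch of computing the LQR state-feedback gain by solving the (continuous-time) algebraic Riccati equation, the classical approach the paper contrasts against. The double-integrator matrices and weights below are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator dynamics: x_dot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weight in the quadratic cost
R = np.array([[1.0]])  # input weight

# Solve A'P + PA - P B R^{-1} B' P + Q = 0 for P
P = solve_continuous_are(A, B, Q, R)

# Optimal state-feedback gain: u*(t) = -K x(t)
K = np.linalg.inv(R) @ B.T @ P
print(K)  # approximately [[1.0, 1.7320508]]
```

For this example the closed-loop matrix A - BK has eigenvalues with negative real parts, confirming the stabilizing property of the Riccati-based gain.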
Subsequently, the conditions which must be satisfied by such an optimal control law are derived. Hence, only necessary conditions are obtained, and their sufficiency is not guaranteed. Furthermore, the obtained control law is usually only locally optimal. Nevertheless, results are available in the literature which provide conditions under which the necessary conditions indeed become sufficient and the globally optimal control law is obtained [3,4,5]. Note that, in solving optimal control design problems, the CoV method uses the notion of so-called co-states (which are not actually present in the system). Moreover, in the solution procedure, the existence of a linear relationship between the states and co-states is exploited to compute the closed form of the optimal control law (this is particularly true for linear quadratic problems); see [6] for more details.

Alongside CoV, another tool, namely dynamic programming (DP), introduced by Bellman, has also been explored to solve optimal control problems. The application of DP to optimal control design p...
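For the linear quadratic case, the costate-based route mentioned above can be condensed as follows (standard symbols; a sketch of the textbook derivation, not the paper's method):

```latex
\begin{align}
\dot{x} &= Ax + Bu, \qquad
J = \tfrac{1}{2}\int_0^{\infty}\!\left(x^{\top} Q x + u^{\top} R u\right)\mathrm{d}t,\\
H &= \tfrac{1}{2}\left(x^{\top} Q x + u^{\top} R u\right) + \lambda^{\top}(Ax + Bu),\\
\frac{\partial H}{\partial u} = 0
  &\;\Rightarrow\; u^{*} = -R^{-1}B^{\top}\lambda,\\
\lambda = Px
  &\;\Rightarrow\; u^{*} = -R^{-1}B^{\top}P x .
\end{align}
```

Substituting the linear relation $\lambda = Px$ into the costate equation $\dot{\lambda} = -\partial H/\partial x$ yields the algebraic Riccati equation $A^{\top}P + PA - PBR^{-1}B^{\top}P + Q = 0$; it is precisely this assumed state-costate relationship that the Krotov framework dispenses with.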