In this paper we investigate possible approaches to studying general time-inconsistent optimization problems without assuming the existence of an optimal strategy. This immediately leads to the need to refine the concept of time-consistency, as well as any method that is based on Pontryagin's Maximum Principle. The fundamental obstacle is the dilemma of having to invoke the Dynamic Programming Principle (DPP) in a time-inconsistent setting, which is contradictory in nature. The main contribution of this work is the introduction of the idea of the "dynamic utility," under which the original time-inconsistent problem (under the fixed utility) becomes a time-consistent one. As a benchmark model, we shall consider a stochastic control problem with multidimensional backward SDE dynamics, which covers many existing time-inconsistent problems in the literature as special cases; and we argue that the time inconsistency is essentially equivalent to the lack of a comparison principle. We shall propose three approaches aiming at reviving the DPP in this setting: the duality approach, the dynamic utility approach, and the master equation approach. Unlike the game approach in many existing works on continuous-time models, all our approaches produce the same value as the original static problem.

In this paper we propose some possible approaches to tackle general time-inconsistent optimization problems in the continuous-time setting. These approaches are different from all the existing ones in the literature, and are based on our new understanding of the time inconsistency. We note that time inconsistency appears naturally and frequently in economics and finance; see, e.g., Kahneman-Tversky [20, 21]. We refer to the frequently cited survey Strotz [30] for the fundamentals of this problem, and to Zhou [34] for some recent developments on continuous-time models. We should point out that it was [34] that brought the time inconsistency issue to our attention.

I. Time inconsistency.
We begin by briefly describing the time inconsistency in an optimization problem, as it has been understood so far. Consider an optimization problem over a time interval $[0,T]$:

$$V_0 := \sup_{u \in \mathscr{U}[0,T]} J(u), \tag{1.1}$$

where $\mathscr{U}[0,T]$ is an appropriate set of admissible controls $u$ defined on $[0,T]$, and $J(u)$ is a certain utility functional associated to $u$. Clearly, the problem (1.1) is static. Its dynamic counterpart is the following optimization problem over $[t,T]$, for any $t \in [0,T]$:

$$V_t := \mathop{\rm ess\,sup}_{u \in \mathscr{U}[t,T]} J_t(u). \tag{1.2}$$

Here $\mathscr{U}[t,T]$ is the corresponding set of admissible controls on $[t,T]$, and the utility functional $J_t$ usually involves some conditional expectation, and thus could be random. An admissible control $u^* \in \mathscr{U}[0,T]$ is called "optimal" for the problem (1.1) if $J(u^*) = V_0$. Defining an optimal control $u^{t,*}$ for the problem (1.2) similarly, and assuming their existence, we say the problem (1.2) is time-consistent if, for any $t \in [0,T]$, it holds that

$$u^{t,*} = u^*\big|_{[t,T]}. \tag{1.3}$$

The relation (1.3) amounts to saying that a (temporally) global optimum must be a local one. The optimization problem (1.2) is called time-inconsistent otherwise.
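As a minimal numerical illustration of how (1.3) can fail (a standard textbook example, not taken from this paper), consider quasi-hyperbolic ("beta-delta") discounting: the controller at time 0 plans to wait for a large reward at time 2, but upon re-optimizing at time 1 prefers the small immediate reward. The reward sizes and discount parameters below are hypothetical, chosen only to exhibit the preference reversal.

```python
def discount(beta, delta, lag):
    """Quasi-hyperbolic discount factor for a reward received `lag` periods ahead."""
    return 1.0 if lag == 0 else beta * delta ** lag

def preferred(beta, delta, t):
    """Choice between a small reward S paid at time 1 and a large reward L paid
    at time 2, evaluated from the viewpoint of time t (t = 0 or t = 1)."""
    S, L = 10.0, 15.0  # hypothetical reward sizes
    v_small = discount(beta, delta, 1 - t) * S
    v_large = discount(beta, delta, 2 - t) * L
    return "S" if v_small > v_large else "L"

beta, delta = 0.5, 1.0
plan_at_0 = preferred(beta, delta, 0)  # globally optimal plan made at time 0
plan_at_1 = preferred(beta, delta, 1)  # locally re-optimized choice at time 1
print(plan_at_0, plan_at_1)            # -> L S : the two disagree, so (1.3) fails
```

Here the time-0 plan ("wait for L") is no longer optimal for the time-1 problem, which is exactly the failure of the consistency relation (1.3).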