We consider a broad family of control strategies called path-dependent action optimization (PDAO), in which every control decision is treated as the solution to an optimization problem with a path-dependent objective function. How well such a scheme performs depends on the choice of objective function, and in general it may be difficult to tell, without extensive simulation and testing, whether a given PDAO design performs well. We develop a framework for bounding the performance of PDAO schemes. We first establish a general performance bound, expressed in terms of two curvature parameters, for the greedy scheme in string optimization problems whose objective functions are prefix monotone. We then show that every PDAO scheme is the greedy scheme for some optimization problem; if that problem is equivalent to the problem of interest and its objective is provably prefix monotone, then the PDAO scheme is guaranteed to be within a certain factor of optimal. We show how to apply this framework to stochastic optimal control problems to bound the performance of approximate dynamic programming (ADP) schemes, which replace the expected value-to-go term in Bellman's principle with a computationally tractable approximation. Our framework provides the first systematic approach to bounding the performance of general ADP methods in the stochastic setting.
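To make the ADP connection concrete, the per-stage decision rule of such schemes can be sketched as follows (the notation here, with state $x_k$, dynamics $f$, stage reward $g$, and disturbance $w_k$, is illustrative and not fixed by the abstract):
\[
  u_k \in \operatorname*{arg\,max}_{u \in \mathcal{U}} \;
  \mathbb{E}\!\left[\, g(x_k, u, w_k) + \widetilde{V}_{k+1}\big(f(x_k, u, w_k)\big) \,\middle|\, x_k \right],
\]
where $\widetilde{V}_{k+1}$ is a computationally tractable approximation of the optimal value-to-go; exact dynamic programming corresponds to the special case $\widetilde{V}_{k+1} = V^{*}_{k+1}$. Viewed this way, each stage of an ADP scheme is a path-dependent optimization, which is what places it within the PDAO family analyzed here.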