Dynamic programming (DP) is a general-purpose problem-solving methodology based on problem decomposition. The idea is to decompose a “difficult” problem into a family of “related” problems, which are often, but not always, “easier” subproblems of the original. The plan of attack is to embed the “difficult” problem of interest (the target problem) in a family of modified problems, and to use a functional equation to relate the solutions of these modified problems to one another. Solving this functional equation yields the solutions to the modified problems as well as to the target problem. This approach has an extremely wide scope of application, but in operations research DP is used primarily as a framework for the modeling, analysis, and solution of optimization problems.

The evolution of DP has entrenched the convention of treating problems as sequential decision problems. This convention is reflected in DP terminology, notably in the central roles that the concepts of “state” and “state transition” play in the theory. The subtle modeling issues associated with this approach have earned DP its reputation (perhaps notoriety) of being an “art.”

Although DP's scope is extremely wide-ranging, in practice its use is often hampered by the curse of dimensionality. This difficulty has spawned a host of methods and techniques aimed at the computational aspects of DP. This ongoing effort provides challenges and opportunities, especially in the development of general-purpose DP software.
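To make the scheme concrete, here is a minimal sketch (not taken from the original text) that applies the DP recipe to the 0/1 knapsack problem as the target problem. The states, the two state transitions (skip or take an item), and the instance data (`values`, `weights`, `capacity`) are all illustrative assumptions; the functional equation is the standard knapsack recursion.

```python
from functools import lru_cache

# Illustrative instance data (assumed, not from the original text).
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

@lru_cache(maxsize=None)
def best(i, cap):
    """Optimal value of the modified problem at state (i, cap):
    choose among items i, i+1, ... with remaining capacity cap."""
    if i == len(values):           # no items left: value 0
        return 0
    skip = best(i + 1, cap)        # state transition: skip item i
    if weights[i] <= cap:          # state transition: take item i
        take = values[i] + best(i + 1, cap - weights[i])
        return max(skip, take)
    return skip

# The solution to the target problem is the value at the initial state.
print(best(0, capacity))
```

The memoization (`lru_cache`) is what makes the embedding pay off: each modified problem, identified by its state, is solved once, and the functional equation stitches those solutions together into the answer for the target problem.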