This paper analyzes a connection between risk-sensitive and minimax criteria for discrete-time, finite-state Markov Decision Processes (MDPs). We synthesize optimal policies with respect to both criteria, for both finite-horizon and discounted infinite-horizon problems. A generalized decision-making framework is introduced, leading to stationary risk-sensitive and minimax optimal policies on the infinite horizon with discounted costs. We introduce the mixed risk-neutral/minimax objective, and utilize results from risk-neutral and minimax control to derive an information state process and dynamic programming equations for the value function. We synthesize optimal control laws on both the finite and infinite horizon, and establish the effectiveness of the controller as a tool to trade off risk-neutral and minimax objectives.