Stochastic adaptive dynamics require analytical methods and solution concepts that differ in important ways from those used to study deterministic processes. Consider, for example, the notion of asymptotic stability: in a deterministic dynamical system, a state is locally asymptotically stable if all sufficiently small deviations from the original state are self-correcting. We can think of this as a first step toward analyzing the effect of stochastic shocks, namely, a state is locally asymptotically stable if, after the impact of a one-time, small stochastic shock, the process evolves back to its original state. This idea is not very satisfactory, however, because it treats shocks as if they were isolated events. Economic systems are typically composed of large numbers of interacting agents who are constantly being jostled about by perturbations from a variety of sources. Persistent shocks have substantially different effects than do one-time shocks; in particular, persistent shocks can accumulate and tip the process out of the basin of attraction of an asymptotically stable state. Thus, in a stochastic setting, conventional notions of dynamic stability, including evolutionarily stable strategies, are often inadequate to characterize the long-run behavior of the process. Here we shall outline an alternative approach that is based on the theory of large deviations in Markov processes (Freidlin and Wentzell, 1984; Foster and Young, 1990; Young, 1993a).
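The contrast between one-time and persistent shocks can be made concrete with a minimal numerical sketch (an illustration constructed for this discussion, not a model from the sources cited): a one-dimensional dynamic with two locally asymptotically stable states, at x = -1 and x = +1, separated by a basin boundary at x = 0. Starting inside the basin of x = +1, the deterministic process corrects any small initial deviation, while the same process subjected to persistent small Gaussian shocks eventually escapes into the other basin.

```python
import random

def step(x, dt=0.05, sigma=0.0, rng=random):
    """One Euler step of x' = x - x**3, plus an optional Gaussian shock.

    The drift f(x) = x - x**3 has stable rest points at -1 and +1,
    with the basin boundary at 0.
    """
    return x + dt * (x - x**3) + sigma * rng.gauss(0.0, 1.0)

def simulate(x0, steps, sigma, seed=0):
    """Run the dynamic, recording whether it ever crossed the boundary."""
    rng = random.Random(seed)
    x = x0
    crossed = False
    for _ in range(steps):
        x = step(x, sigma=sigma, rng=rng)
        if x < 0:
            crossed = True
    return x, crossed

# One-time deviation (start at 0.8 instead of 1.0), no further shocks:
# the deviation is self-correcting and the process returns to x = 1.
x_det, crossed_det = simulate(0.8, steps=2000, sigma=0.0)

# Persistent shocks (sigma chosen for illustration): individually small
# perturbations accumulate and eventually tip the process across x = 0,
# out of the basin of attraction of the stable state at +1.
x_sto, crossed_sto = simulate(0.8, steps=100000, sigma=0.15)
```

The parameter values here are arbitrary; the qualitative point is that the deterministic run never leaves the basin, whereas under persistent noise an eventual crossing is essentially guaranteed given enough time, which is exactly why asymptotic stability alone cannot characterize long-run behavior in the stochastic setting.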