Apart from a few exceptions, the mathematical runtime analysis of evolutionary algorithms is mostly concerned with expected runtimes. In this work, we argue that stochastic domination is a notion that should be used more frequently in this area. Stochastic domination makes it possible to formulate much more informative performance guarantees, it allows one to decouple the algorithm analysis into the true algorithmic part of detecting a domination statement and the probability-theoretical part of deriving the desired probabilistic guarantees from this statement, and it helps in finding simpler and more natural proofs.

As particular results, we prove a fitness level theorem which shows that the runtime is dominated by a sum of independent geometric random variables, we prove the first tail bounds for several classic runtime problems, and we give a short and natural proof for Witt's result that the runtime of any (µ, p) mutation-based algorithm on any function with a unique optimum is subdominated by the runtime of a variant of the (1 + 1) EA on the OneMax function. As side products, we determine the fastest unbiased (1+1) algorithm for the LeadingOnes benchmark problem, both in the general case and when restricted to static mutation operators, and we prove a Chernoff-type tail bound for sums of independent coupon collector distributions.

* Extended version of a paper that appeared at EvoCOP 2018 [Doe18a]. This version contains as new material a section on known precise runtime distributions, a Chernoff bound for sums of independent coupon collector runtimes, several new tail bounds for classic runtime results, and a section on counter-examples.
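To make the fitness-level domination statement concrete, the following is a minimal simulation sketch (not part of the paper; the parameters n, the mutation rate 1/n, and the level probabilities p_i are the standard textbook choices for the (1+1) EA on OneMax). It compares the empirical runtime of the (1+1) EA on OneMax against samples of the dominating sum of independent geometric random variables, where p_i = ((n-i)/n)(1-1/n)^(n-1) lower-bounds the probability of leaving fitness level i by a single improving bit flip.

```python
import random

def one_plus_one_ea_runtime(n, rng):
    """Run the (1+1) EA with standard bit mutation (rate 1/n) on OneMax;
    return the number of iterations until the all-ones optimum is found."""
    x = [rng.random() < 0.5 for _ in range(n)]
    fitness = sum(x)
    t = 0
    while fitness < n:
        t += 1
        y = [(not b) if rng.random() < 1 / n else b for b in x]
        fy = sum(y)
        if fy >= fitness:  # accept offspring if not worse
            x, fitness = y, fy
    return t

def geometric_sum_sample(n, rng):
    """Sample the dominating runtime from the fitness level theorem:
    a sum of independent Geom(p_i) variables over all levels i = 0..n-1,
    with p_i = ((n - i)/n) * (1 - 1/n)**(n - 1)."""
    total = 0
    for i in range(n):
        p = (n - i) / n * (1 - 1 / n) ** (n - 1)
        g = 1  # geometric variable: trials until first success
        while rng.random() >= p:
            g += 1
        total += g
    return total

rng = random.Random(1)
n, runs = 50, 200
ea = sum(one_plus_one_ea_runtime(n, rng) for _ in range(runs)) / runs
dom = sum(geometric_sum_sample(n, rng) for _ in range(runs)) / runs
print(ea <= dom)
```

Since the EA starts at roughly n/2 one-bits while the geometric sum covers all n levels, the empirical EA average should fall clearly below the average of the dominating distribution; the tail bounds in the paper are obtained by applying Chernoff-type bounds for sums of independent geometric variables to exactly this kind of dominating sum.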