Randomized search heuristics like local search, tabu search, simulated annealing, or all kinds of evolutionary algorithms have many applications. However, for most problems the best worst-case expected run times are achieved by more problem-specific algorithms. This raises the question of the limits of general randomized search heuristics. Here a framework called black-box optimization is developed. The essential issue is that the problem, but not the problem instance, is known to the algorithm, which can collect information about the instance only by asking for the value of points in the search space. All known randomized search heuristics fit into this scenario. Lower bounds on the black-box complexity of problems are derived without complexity-theoretical assumptions and are compared to upper bounds in this scenario.

* This work was supported by the Deutsche Forschungsgemeinschaft (DFG) as part of the Collaborative Research Center "Computational Intelligence" (SFB 531).

The theory of efficient algorithms is well developed for deterministic algorithms as well as for randomized algorithms (see, e.g., Cormen, Leiserson, and Rivest (1990) and Motwani and Raghavan (1995)). The criterion of the analysis is the asymptotic (w.r.t. the problem dimension), worst-case (w.r.t. the problem instance), expected (w.r.t. the random bits used by the algorithm) run time of the algorithm. Large lower bounds need some complexity-theoretical assumption like NP ≠ P or NP ≠ RP. For almost all well-known optimization problems the best algorithms in this scenario are problem-specific algorithms which use the structure of the problem and compute properties of the specific problem instance.

This implies that randomized search heuristics (local search, tabu search, simulated annealing, all kinds of evolutionary algorithms) are typically not considered in this context. They do not beat the highly specialized algorithms in their domain. Nevertheless, practitioners report surprisingly good results with these heuristics. Therefore, it makes sense to investigate these algorithms theoretically. There are theoretical results on local search (Papadimitriou, Schäffer, and Yannakakis (1990)). The analysis of the expected run time of the other search heuristics is difficult, but there are some results (see, e.g., Glover and Laguna (1993) for tabu search, Kirkpatrick, Gelatt, and Vecchi (1983) and Sasaki and Hajek (1988) for simulated annealing, and Rabani, Rabinovich, and Sinclair (1998), Wegener (2001), Droste, Jansen, and Wegener (2002), and Giel and Wegener (2003) for evolutionary algorithms). Up to now, there is no "complexity theory for randomized search heuristics" which covers all randomized search heuristics and excludes highly specialized algorithms. Such an approach is presented in this paper.

Our approach follows the tradition in complexity theory to describe and analyze restricted scenarios. There are well-established computation models like, e.g., circuits or branching programs (also called binary decision diagrams or BDDs) where one is not able to prove large lower bounds for explicitly defined problems. Therefore...
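
To make the black-box scenario concrete, the following minimal Python sketch (not part of the paper) shows a heuristic, here a simple (1+1) evolutionary algorithm, that learns about the problem instance only by querying the value of points in the search space; the ONEMAX example instance and all parameter values are illustrative assumptions rather than anything prescribed by the paper.

    import random

    def one_plus_one_ea(f, n, max_queries=10000):
        # Black-box access: the only information about the problem instance
        # is obtained by querying f at points of the search space {0,1}^n.
        x = [random.randint(0, 1) for _ in range(n)]   # uniform random start
        fx = f(x)                                      # first query
        queries = 1
        while queries < max_queries:
            # Standard bit mutation: flip each bit independently with prob. 1/n.
            y = [1 - b if random.random() < 1.0 / n else b for b in x]
            fy = f(y)                                  # one more query
            queries += 1
            if fy >= fx:                               # keep y if it is not worse (maximization)
                x, fx = y, fy
        return x, fx

    # Illustrative instance (an assumption of this sketch): ONEMAX, i.e.,
    # maximize the number of ones; the heuristic never inspects this definition,
    # it only observes the returned function values.
    print(one_plus_one_ea(lambda x: sum(x), 20))

Any heuristic of this form can be analyzed purely in terms of the number of queries it spends, which is exactly the cost measure of the black-box complexity framework.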