The performance of enterprise software systems has a direct impact on business success. Recent studies have shown that software performance affects both customer satisfaction and operational costs. Hence, software performance constitutes an essential competitive and differentiating factor for software vendors and operators. In industrial practice, detecting software performance problems before end users encounter them remains a challenging task. Diagnosing performance problems requires deep expertise in performance engineering and still entails high manual effort. As a consequence, performance evaluations are postponed to the very end of the development process, or even omitted entirely. Instead of proactively avoiding performance problems, problems are fixed reactively when they first emerge in operations. Since reactive, operation-time resolution of performance problems is very expensive and damages the reputation of software vendors, performance problems need to be diagnosed and resolved during software development. Existing approaches to performance problem diagnostics either assume the existence of a performance model, are limited to problem detection without analyzing root causes, or are applied reactively during the operations phase and thus cannot be applied effectively during development.

In this thesis, we introduce an automatic, experiment-based approach for performance problem diagnostics in enterprise software systems. We describe a method to derive a taxonomy of recurrent types of performance problems and introduce a systematic experimentation concept. Using the taxonomy as a search tree, the proposed approach systematically searches for the root causes of detected performance problems by executing series of systematic performance experiments.
Based on the measurement data from these experiments, detection heuristics decide on the presence of performance problems in the target system. Furthermore, we develop a domain-specific description language to specify the information required for automatic performance problem diagnostics. Finally, we create and evaluate a representative set of detection heuristics. We validate our approach by means of five studies, including end-to-end case studies, a controlled experiment, and an empirical study. The results of the validation show that our approach is applicable to a wide range of contexts and is able to detect performance problems in medium-size and large-scale applications fully automatically and accurately. External users of the provided approach evaluated it as useful support for diagnosing performance problems and expressed their willingness to use the approach in their own software development projects. Explicitly designed for automatic, development-time testing, our approach can be incorporated into continuous integration. In this way, our approach allows regular, automatic diagnostics of performance problems involving minimal manual effort.
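The diagnostics loop described above, using the taxonomy as a search tree whose nodes are tested by experiments and detection heuristics, can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the class and function names, the example antipattern labels, and the threshold heuristics are all assumptions made for illustration, and the stubbed measurement dictionary stands in for real, systematically executed performance experiments.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class ProblemType:
    """A node in the taxonomy of recurrent performance problem types."""
    name: str
    children: List["ProblemType"] = field(default_factory=list)


def diagnose(
    node: ProblemType,
    run_experiment: Callable[[str], dict],
    heuristics: Dict[str, Callable[[dict], bool]],
    detected: Optional[List[str]] = None,
) -> List[str]:
    """Depth-first search over the taxonomy: run an experiment for the
    current problem type, apply its detection heuristic, and only descend
    into sub-problems when the parent symptom is present (pruning the
    rest of the tree)."""
    if detected is None:
        detected = []
    data = run_experiment(node.name)      # systematic performance experiment
    if heuristics[node.name](data):       # heuristic decides on presence
        detected.append(node.name)
        for child in node.children:
            diagnose(child, run_experiment, heuristics, detected)
    return detected


# Toy taxonomy with two well-known performance antipatterns as leaves.
ramp = ProblemType("The Ramp")
olb = ProblemType("One-Lane Bridge")
root = ProblemType("High Response Time", [ramp, olb])

# Fake measurement data and threshold heuristics, for illustration only.
measurements = {
    "High Response Time": {"resp_ms": 900},
    "The Ramp": {"slope": 0.1},
    "One-Lane Bridge": {"wait_ratio": 0.7},
}
heuristics = {
    "High Response Time": lambda d: d["resp_ms"] > 500,
    "The Ramp": lambda d: d["slope"] > 0.05,
    "One-Lane Bridge": lambda d: d["wait_ratio"] > 0.5,
}

result = diagnose(root, measurements.get, heuristics)
```

Because the search only expands children of detected problems, experiments for branches whose top-level symptom is absent are never executed, which is what keeps a systematic, fully automatic search tractable.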