Testing is a crucial part of software development in general, and hence also in mathematical programming. Unfortunately, it is often a time-consuming and not very exciting activity. This naturally motivated us to increase the efficiency of testing solvers for optimization problems and to automate as much of the procedure as possible.

Keywords: test environment, optimization, solver benchmarking, solver comparison

The testing procedure typically consists of three basic tasks: a) organize test problem sets, also called test libraries; b) solve selected test problems with selected solvers; c) analyze, check, and compare the results. The Test Environment is a graphical user interface (GUI) that enables the user to manage tasks a) and b) interactively, and task c) automatically.

The Test Environment is particularly designed for users who seek to 1. adjust solver parameters, or 2. compare solvers on single problems, or 3. evaluate solvers on suitable test sets.

The first point considers a situation in which the user wants to improve the parameters of a particular solver manually, see, e.g., [5]. The second point is interesting in many real-life applications in which a good solution algorithm for a particular problem is sought, e.g., in [10] (all for black box problems). The third point targets general benchmarks of solver software. It often requires selecting subsets of large test problem sets (based on common characteristics, like similar problem size), and afterwards running all available solvers on these subsets with problem-class-specific default parameters, e.g., a timeout. Finally, all tested solvers are compared with respect to some performance measure. In the literature, such comparisons typically exist for black box problems only, see, e.g., [17] for global optimization, or the large online collection [16], mainly for local optimization.
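The three tasks a)-c) can be illustrated with a minimal sketch. This is not the Test Environment's actual implementation; the problem format and solver interface are invented for illustration only.

```python
# Hypothetical sketch of the three benchmarking tasks:
# a) organize a test library, b) run each selected solver on each
# selected problem, c) collect the raw results for later analysis.

def run_benchmark(library, solvers, timeout=60.0):
    """library: list of problem names; solvers: dict name -> callable.

    Each solver callable takes (problem, timeout) and returns a result.
    Returns a nested table results[problem][solver] for task c).
    """
    results = {}
    for problem in library:                    # task a): the organized test set
        results[problem] = {}
        for name, solve in solvers.items():    # task b): solve each problem
            results[problem][name] = solve(problem, timeout)
    return results                             # task c): input for the analysis
```

In the Test Environment itself, tasks a) and b) are driven interactively through the GUI, and only the analysis step c) is fully automatic.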
Since in many real-life applications models are given as black box functions (e.g., the three examples we mentioned in the last paragraph), it is popular to focus comparisons on this problem class. However, the popularity of modeling languages like AMPL and GAMS, cf. [1], [9], which formulate objectives and constraints algebraically, is increasing. Thus first steps have been made towards comparisons of global solvers using modeling languages, e.g., on the Gamsworld website [11], which offers test sets and tools for comparing solvers with an interface to GAMS.

One main difficulty of solver comparison is to determine a reasonable criterion to measure the performance of a solver. For our comparisons we will count for each solver the number of global solutions found, and the number of wrong and correct claims for the solutions. Here we use the term global solution for the best solution found among all solvers.

A severe showstopper of many current test environments is that they are uncomfortable to use, i.e., the library and solver management are not very user-friendly, and features like automated LaTeX table creation are missing. Test environments like...
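The performance measure described above can be made concrete with a small sketch. This is a hedged illustration under our own assumptions, not the Test Environment's actual scoring code: the data layout, the tolerance `tol`, and the treatment of optimality claims are all invented for the example.

```python
# Hypothetical scoring of the measure described in the text:
# the "global solution" of a problem is the best objective value found
# by ANY solver on that problem; a solver's claim of (global) optimality
# is counted as correct iff it actually attained that best value.

def score_solvers(results, tol=1e-6):
    """results[problem][solver] = (claimed_optimal, objective_value).

    Returns per-solver counts of global solutions found and of
    correct/wrong optimality claims (minimization assumed).
    """
    counts = {}
    for problem, runs in results.items():
        # best value found among all solvers defines the "global solution"
        best = min(obj for _, obj in runs.values())
        for solver, (claimed, obj) in runs.items():
            c = counts.setdefault(solver, {"global": 0, "correct": 0, "wrong": 0})
            found_global = obj <= best + tol
            if found_global:
                c["global"] += 1
            if claimed:  # the solver claimed its solution is globally optimal
                c["correct" if found_global else "wrong"] += 1
    return counts
```

Note that this measure is relative: if every solver misses the true global optimum of a problem, the best value found still counts as the global solution for the comparison.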