In the last few decades, evolutionary algorithms have emerged as a revolutionary approach for solving search and optimization problems involving multiple conflicting objectives. Beyond their ability to search intractably large spaces for multiple solutions, these algorithms are able to maintain a diverse population of solutions and exploit similarities of solutions by recombination. However, existing theory and numerical experiments have demonstrated that it is impossible to develop a single algorithm for population evolution that is always efficient for a diverse set of optimization problems. Here we show that significant improvements in the efficiency of evolutionary search can be achieved by running multiple optimization algorithms simultaneously using new concepts of global information sharing and genetically adaptive offspring creation. We call this approach a multialgorithm, genetically adaptive multiobjective, or AMALGAM, method, to evoke the image of a procedure that merges the strengths of different optimization algorithms. Benchmark results using a set of well-known multiobjective test problems show that AMALGAM approaches a factor of 10 improvement over current optimization algorithms for the more complex, higher-dimensional problems. The AMALGAM method provides new opportunities for solving previously intractable optimization problems.

evolutionary search | multiple objectives | optimization problems | Pareto front

Evolutionary optimization is a subject of intense interest in many fields of study, including computational chemistry, biology, bioinformatics, economics, computational science, geophysics, and environmental science (1-8). The goal is to determine values for model parameters or state variables that provide the best possible solution to a predefined cost or objective function, or a set of optimal tradeoff values in the case of two or more conflicting objectives.
However, locating optimal solutions often turns out to be painstakingly tedious, or even completely beyond current or projected computational capacity (9).

Here, we consider a multiobjective minimization problem with $n$ decision variables (parameters) and $m$ objectives:

minimize $y = F(x) = \{f_1(x), \ldots, f_m(x)\}$,

where $x$ denotes the decision vector and $y$ the corresponding vector in the objective space. We restrict attention to optimization problems in which the parameter search space $X$, although perhaps quite large, is bounded: $x = (x_1, \ldots, x_n) \in X$. The presence of multiple objectives in an optimization problem gives rise to a set of Pareto-optimal solutions, instead of a single optimal solution (10, 11). A Pareto-optimal solution is one in which no objective can be further improved without causing a simultaneous degradation in at least one other objective. As such, Pareto-optimal solutions represent globally optimal solutions to the tradeoff problem.

Numerous approaches have been proposed to efficiently find Pareto-optimal solutions for complex multiobjective optimization problems (12-15). In particular, evolutionary algorithms have emerged as the most powerful approach for solving search and optimization problems.
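The definition of Pareto optimality above can be made concrete with a short sketch. The following Python code (an illustrative example, not part of the AMALGAM method itself) implements the dominance relation for minimization and extracts the non-dominated set from a sampled population, using a hypothetical bi-objective problem $f_1(x) = x^2$, $f_2(x) = (x-2)^2$, whose Pareto-optimal decision values lie in $[0, 2]$:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Sample the decision space of a toy bi-objective problem:
# f1(x) = x^2 and f2(x) = (x - 2)^2 conflict on the interval [0, 2].
xs = [i / 10 for i in range(-10, 31)]          # x in [-1.0, 3.0]
objs = [(x**2, (x - 2)**2) for x in xs]
front = pareto_front(objs)                      # tradeoff set for x in [0, 2]
```

Every point on `front` trades a decrease in $f_1$ against an increase in $f_2$; points with $x < 0$ or $x > 2$ are dominated and discarded.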