Solving many-objective optimization problems (MaOPs) remains a significant challenge in the multi-objective optimization (MOO) field. One way to measure algorithm performance is through benchmark functions (also called test functions or test suites), which are artificial problems with a well-defined mathematical formulation, known solutions, and a variety of features and difficulties. In this paper we propose a parameterized generator of scalable and customizable benchmark problems for MaOPs. It can generate problems that reproduce features present in other benchmarks, as well as problems with new features. We propose the concept of generative benchmarking, in which one can generate an unlimited number of MOO problems by varying the parameters that control specific features the problems should have: scalability in the number of variables and objectives, bias, deceptiveness, multimodality, robust and non-robust solutions, shape of the Pareto front, and constraints. The proposed Generalized Position-Distance (GPD) tunable benchmark generator uses the position-distance paradigm, a basic approach to building test functions that is also used in benchmarks such as Deb-Thiele-Laumanns-Zitzler (DTLZ) and the Walking Fish Group (WFG) suite. It generates problems that are scalable in the number of variables and objectives and whose Pareto fronts exhibit different characteristics. The resulting functions are easy to understand and visualize, easy to implement, fast to compute, and their Pareto-optimal solutions are known.
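The position-distance paradigm is easiest to see in a concrete instance. The minimal Python sketch below implements DTLZ2, one of the standard benchmarks built on this paradigm (it illustrates the construction the GPD generator generalizes, not the GPD formulation itself): the first m-1 "position" variables place a point on the spherical Pareto front, and the remaining "distance" variables control the distance from that front.

```python
import numpy as np

def dtlz2(x, m=3):
    """DTLZ2: the first m-1 "position" variables choose a point on the
    unit-sphere Pareto front; the remaining "distance" variables set the
    distance g >= 0 from the front (g = 0 on the front itself)."""
    x = np.asarray(x, dtype=float)
    pos, dist = x[:m - 1], x[m - 1:]
    g = np.sum((dist - 0.5) ** 2)      # distance function: 0 when x_i = 0.5
    theta = pos * np.pi / 2.0          # map position variables to angles
    f = np.full(m, 1.0 + g)
    for i in range(m):
        f[i] *= np.prod(np.cos(theta[:m - 1 - i]))
        if i > 0:
            f[i] *= np.sin(theta[m - 1 - i])
    return f

# Any x whose distance variables all equal 0.5 is Pareto optimal; the
# objective vector then lies exactly on the unit sphere.
print(dtlz2(np.array([0.2, 0.7, 0.5, 0.5, 0.5])))
```

Because position and distance roles are decoupled, the Pareto set is known by construction, which is what makes such functions cheap to verify against.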
Decision making is a complex task that requires considerable cognitive effort from the decision maker. Multi-criteria methods, especially those based on pairwise comparisons, such as the Analytic Hierarchy Process (AHP), are not viable for large-scale decision-making problems. For this reason, the aim of this paper is to learn the preferences of the decision maker using machine learning techniques, in order to reduce the number of queries required in decision problems. We used a recently published parameterized generator of scalable and customizable benchmark problems for many-objective problems as a large-scale data generator. The proposed methodology is an iterative method in which a small subset of solutions is presented to the decision maker to obtain pairwise judgments. This information is fed to an algorithm that learns the preferences for the remaining pairs in the decision matrix. The Gradient Boosting Regressor was applied to a problem with 5 criteria and 210 solutions, using subsets of 5, 7, and 10 solutions in each iteration, and the MSE, RMSE, MAPE, and R² metrics were computed. After the 8th iteration the ranking similarity, measured by the Kendall tau distance, stabilized. The main advantage of the proposed approach is that only 8 iterations, presenting 5 solutions at a time, were necessary to learn the preferences and obtain an accurate final ranking.
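The abstract does not give implementation details, but the iterative loop can be sketched as follows, assuming scikit-learn's GradientBoostingRegressor, a hypothetical `query_dm(i, j)` oracle standing in for the decision maker's pairwise judgment, and pair features formed by concatenating the two criterion vectors. All of these choices are assumptions for illustration, not the paper's exact protocol.

```python
import itertools
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def learn_preferences(solutions, query_dm, n_iters=8, batch=5, seed=0):
    """Iteratively query the DM on small batches and learn the rest.

    `solutions` is an (n, k) array of criterion vectors; `query_dm(i, j)`
    is a hypothetical oracle returning the DM's judgment for pair (i, j)."""
    rng = np.random.default_rng(seed)
    X_train, y_train = [], []
    model = GradientBoostingRegressor()
    for _ in range(n_iters):
        # Show the DM a small batch and collect its pairwise judgments.
        idx = rng.choice(len(solutions), size=batch, replace=False)
        for i, j in itertools.combinations(idx, 2):
            X_train.append(np.concatenate([solutions[i], solutions[j]]))
            y_train.append(query_dm(i, j))
        model.fit(np.array(X_train), np.array(y_train))
    # Predict judgments for every remaining ordered pair in the matrix.
    n = len(solutions)
    pairs = np.array([np.concatenate([solutions[i], solutions[j]])
                      for i in range(n) for j in range(n) if i != j])
    return model.predict(pairs)
```

Convergence across iterations can then be monitored by computing the Kendall tau distance between consecutive predicted rankings and stopping once it stabilizes, which is the criterion the reported 8-iteration result corresponds to.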
Multi-Objective Optimization (MOO) problems may be subject to many modeling or manufacturing uncertainties that affect the performance of the solutions obtained by a multi-objective optimizer. The decision maker must then perform an extra step of sensitivity analysis in which each solution is verified for its robustness, and this post-optimization procedure makes the optimization process expensive and inefficient. To avoid this situation, many researchers have developed robust MOO methods, in which uncertainties are incorporated into the optimization process itself so that it seeks robust optimal solutions. We introduce a coevolutionary approach for robust MOO that does not incorporate robustness measures in either the objective functions or the constraints. Two populations compete in the environment: one represents solutions and minimizes the objectives, while the other represents uncertainties and maximizes the objectives in a worst-case scenario. The proposed method is a coevolutionary version of MOEA/D. The results clearly suggest that these competing coevolving populations are able to identify robust solutions to multi-objective optimization problems.
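As a rough illustration of the two competing populations (not the paper's MOEA/D-based algorithm), the sketch below coevolves solutions against uncertainties for a single scalarized objective f to be minimized; the selection and mutation operators, population sizes, and scalarization are all illustrative assumptions.

```python
import numpy as np

def coevolve_worst_case(f, n_var, delta_max, pop=20, gens=100, seed=0):
    """Competitive coevolution sketch: solutions x (minimizers) vs.
    uncertainties delta (maximizers of f in the worst case).
    `f` maps a vector in [0, 1]^n_var to a scalar; `pop` is assumed even."""
    rng = np.random.default_rng(seed)
    half = pop // 2
    xs = rng.random((pop, n_var))                          # solutions
    ds = rng.uniform(-delta_max, delta_max, (pop, n_var))  # uncertainties
    for _ in range(gens):
        # Fitness of a solution: its worst objective over all uncertainties.
        worst = np.array([max(f(np.clip(x + d, 0.0, 1.0)) for d in ds)
                          for x in xs])
        # Fitness of an uncertainty: average degradation it causes.
        damage = np.array([np.mean([f(np.clip(x + d, 0.0, 1.0)) for x in xs])
                           for d in ds])
        xs = xs[np.argsort(worst)]      # best (lowest worst case) first
        ds = ds[np.argsort(-damage)]    # most damaging first
        # Replace the worse half of each population by mutated copies.
        xs[half:] = np.clip(xs[:half] + rng.normal(0, 0.05, (half, n_var)),
                            0.0, 1.0)
        ds[half:] = np.clip(ds[:half] + rng.normal(0, 0.2 * delta_max,
                                                   (half, n_var)),
                            -delta_max, delta_max)
    return xs[0]  # solution with the best worst-case objective

# A fragile optimum at x = 0.1 (narrow basin) vs. a robust one at x = 0.7
# (wide basin): the coevolved answer should land near 0.7.
f = lambda x: min((x[0] - 0.1) ** 2 / 0.001, (x[0] - 0.7) ** 2 / 0.05)
print(coevolve_worst_case(f, n_var=1, delta_max=0.05))
```

The key point the sketch captures is that no robustness measure appears inside f itself: robustness emerges from the adversarial pressure of the uncertainty population.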