An entropy-based metric is presented to assess the diversity of solutions obtained from a multi-objective optimization technique. This metric quantifies the 'goodness' of a solution set in terms of its distribution quality over the Pareto-optimal frontier. As a demonstration, the entropy metric is used to compare two multi-objective genetic algorithms on a three-objective test example.
Among current meta-modelling approaches, Bayesian-based interpolative methods have received significant attention in the literature. These methods are particularly known for their capability to adapt to the response function behaviour in order to generate good meta-models with fewer experiments. Current Bayesian adaptation techniques, however, are mainly based on the assumption that some variables are more important (or sensitive) than others. These less sensitive variables are weighted less or ignored to reduce the dimension of the design space. This assumption limits the scope and applicability of these models since in many practical cases none of the variables can be completely ignored or weighted less than others. This paper proposes a pragmatic approach that identifies regions of the design space where more experiments are needed based on the response function behaviour. The proposed approach adaptively utilizes the information obtained from previous experiments, builds interim meta-models, and identifies 'irregular' regions in which more experiments are needed. The behaviour of the interim meta-model is then quantified as a spatial function and incorporated into the next stage of the design to sequentially improve the accuracy of the obtained meta-model. The performance of the new approach is demonstrated using a numerical and an engineering example.
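The sequential loop described above (build an interim meta-model, locate 'irregular' regions, add experiments there) can be sketched in a one-dimensional setting. The scoring rule below, comparing a full linear interpolant against one thinned to every other sample, is an illustrative heuristic for detecting rapidly changing regions, not the paper's actual spatial criterion; the function names are my own.

```python
import numpy as np

def next_sample(x, y, candidates):
    """Pick the next experiment point: score each candidate by the gap
    between a linear interpolant through all samples and a coarser one
    built from every other sample -- a simple proxy for 'irregular'
    regions where the interim meta-model changes rapidly.
    (Illustrative heuristic, not the paper's exact criterion.)"""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    fine = np.interp(candidates, xs, ys)              # interim meta-model
    coarse = np.interp(candidates, xs[::2], ys[::2])  # thinned meta-model
    return candidates[np.argmax(np.abs(fine - coarse))]

# Sequentially add experiments for a model of sin(5x): new samples
# concentrate where the two interpolants disagree most.
x = np.linspace(0.0, 1.0, 6)
y = np.sin(5 * x)
cand = np.linspace(0.0, 1.0, 201)
for _ in range(4):
    xn = next_sample(x, y, cand)
    x = np.append(x, xn)
    y = np.append(y, np.sin(5 * xn))
```

In a real setting the interim meta-model would be a Bayesian interpolator rather than a piecewise-linear fit, but the structure of the loop, refit, score candidates, sample the worst region, is the same.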
The ability to select, from a set of feasible alternatives, a design that is likely to meet customers’ and the designer’s preferences while accounting for uncertainties is vital to the success of a product design process. This paper presents a new metric, a Customer-based Expected Utility (CEU) metric, for product design selection that accounts for a range of attribute levels (i.e., the customer range) within which customers make purchase decisions. The metric also accounts for the designer’s preferences and the uncertainty in achieving a desired attribute level (or a combination of attribute levels). The application of the CEU metric is demonstrated by rank-ordering a set of design alternatives for a cordless power tool. Using this metric, design alternatives that fall outside the customer range yield a relatively low CEU value, while among those that fall in the customer range, the alternatives with a higher value of the designer’s utility yield a higher value of the CEU metric.
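The behaviour the abstract describes, low CEU outside the customer range, higher CEU for higher designer's utility inside it, can be illustrated with a minimal Monte Carlo sketch. The zero-utility-outside-range rule, the utility function, and all names here are assumptions for illustration, not the paper's actual formulation.

```python
import random

def ceu(samples, lo, hi, designer_utility):
    """Customer-based Expected Utility (illustrative sketch): average
    the designer's utility over Monte Carlo samples of the uncertain
    attribute, assigning zero utility to outcomes outside the customer
    range [lo, hi]. This zero-outside-range rule is an assumption."""
    total = 0.0
    for a in samples:
        if lo <= a <= hi:
            total += designer_utility(a)
    return total / len(samples)

random.seed(0)
# Hypothetical attribute: run time in minutes; customers buy in 20-40.
inside = [random.gauss(30, 2) for _ in range(10_000)]   # mostly in range
outside = [random.gauss(50, 2) for _ in range(10_000)]  # mostly outside
u = lambda a: min(a / 40.0, 1.0)  # assumed increasing designer utility
```

An alternative falling outside the customer range (`outside`) scores near zero, while one inside the range scores according to the designer's utility, matching the rank-ordering behaviour the abstract describes.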
An entropy-based metric is presented that can be used for assessing the quality of a solution set as obtained from multi-objective optimization techniques. This metric quantifies the “goodness” of a set of solutions in terms of distribution quality over the Pareto frontier. The metric can be used to compare the performance of different multi-objective optimization techniques. In particular, the metric is useful for analyzing multi-objective evolutionary algorithms, wherein it is desirable to compare, on a quantitative basis, the capability of such techniques to produce and maintain diversity among solution points. An engineering test example, the multi-objective design optimization of a speed-reducer, is provided to demonstrate an application of the proposed entropy metric.