Several Multi-Criteria Decision-Making methodologies assume the existence of weights associated with the different criteria, reflecting their relative importance. One of the most popular ways to infer such weights is the Analytic Hierarchy Process, which first constructs a matrix of pairwise comparisons, from which weights are derived following one of many existing procedures, such as the eigenvector method or least (logarithmic) squares. Since different procedures yield different results (weights), we pose the problem of describing the set of weights obtained by "sensible" methods: those which are efficient for the (vector-) optimization problem of simultaneously minimizing the discrepancies. A characterization of the set of efficient solutions is given, which enables us to assert that the least-logarithmic-squares solution is always efficient, whereas the (widely used) eigenvector solution is, in some cases, not efficient, so its use in practice may be questionable.
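As a minimal sketch of the two weight-derivation procedures named above (the comparison matrix here is hypothetical, not from the paper): given a reciprocal pairwise-comparison matrix A where A[i, j] estimates w_i / w_j, the eigenvector method takes the principal (Perron) eigenvector of A, while the least-logarithmic-squares problem has a well-known closed-form solution, the normalized geometric means of the rows.

```python
import numpy as np

# Hypothetical 3x3 pairwise-comparison matrix: A[i, j] estimates w_i / w_j,
# so it is reciprocal (A[j, i] = 1 / A[i, j]) with ones on the diagonal.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Eigenvector method: weights = principal eigenvector of A, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(A)
v = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w_eig = v / v.sum()

# Least-logarithmic-squares method: minimize
#   sum_{i,j} (log A[i, j] - log(w_i / w_j))^2,
# whose closed-form solution is the normalized geometric mean of each row.
gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
w_lls = gm / gm.sum()

print("eigenvector weights:     ", w_eig)
print("log-least-squares weights:", w_lls)
```

For a consistent matrix the two procedures coincide; the paper's point concerns inconsistent matrices, where they may differ and only the latter is guaranteed to be efficient.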
The Big Triangle Small Triangle method has been shown to be a powerful global optimization procedure for addressing continuous location problems. In the paper published in JOGO 37 (2007) 305-319, Drezner proposes a rather general and effective approach for constructing the bounds needed. Such bounds are obtained by exploiting the fact that the objective functions in continuous location models can usually be expressed as a difference of convex functions. In this note we show that, by exploiting further the rich structure of such objective functions, alternative bounds can be derived, yielding a significant improvement in computing times, as reported in our numerical experience.
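To illustrate the d.c.-based bounding idea in general terms (this is a generic sketch, not the paper's specific bounds), take f = g - h with g, h convex. Over a triangle, g is minorized by a tangent plane at the centroid, while h is majorized by the affine interpolation of its vertex values; their difference is an affine minorant of f, whose minimum over the triangle is attained at a vertex. The toy objective below, f(x) = ||x - a|| - ||x - b||, is an assumption chosen for concreteness.

```python
import numpy as np

def dc_lower_bound(triangle, a, b):
    """Affine lower bound over a triangle for f(x) = ||x - a|| - ||x - b||,
    a toy d.c. objective with g(x) = ||x - a|| and h(x) = ||x - b||.

    g convex => g(x) >= g(c) + grad_g(c) . (x - c)   (tangent plane at centroid c)
    h convex => h(x) <= affine interpolation of h's values at the vertices
    Subtracting the two affine functions gives an affine minorant of f,
    whose minimum over the triangle is attained at a vertex, where the
    interpolant of h coincides with h itself.
    """
    V = np.asarray(triangle, dtype=float)        # 3 x 2 array of vertices
    c = V.mean(axis=0)                           # centroid
    grad_g = (c - a) / np.linalg.norm(c - a)     # gradient of ||. - a|| at c (assumes c != a)
    lb_at_vertices = [np.linalg.norm(c - a) + grad_g @ (v - c) - np.linalg.norm(v - b)
                      for v in V]
    return min(lb_at_vertices)

a, b = np.array([0.0, 0.0]), np.array([4.0, 1.0])
tri = [(1.0, 1.0), (2.0, 1.0), (1.5, 2.0)]
print("lower bound on f over the triangle:", dc_lower_bound(tri, a, b))
```

The note's contribution is precisely that sharper bounds than this generic d.c. construction can be obtained by using more of the structure of location objectives.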
Decision trees are popular Classification and Regression tools and, when small-sized, easy to interpret. Traditionally, a greedy approach has been used to build the trees, yielding a very fast training process; however, controlling sparsity (a proxy for interpretability) is challenging. In recent studies, optimal decision trees, where all decisions are optimized simultaneously, have shown better learning performance, especially when oblique cuts are implemented. In this paper, we propose a continuous optimization approach to build sparse optimal classification trees based on oblique cuts, with the aim of using fewer predictor variables in the cuts as well as along the whole tree. Both types of sparsity, namely local and global, are modeled by means of regularizations with polyhedral norms. The computational experience reported supports the usefulness of our methodology: in all our data sets, local and global sparsity can be improved without harming classification accuracy. Moreover, unlike greedy approaches, ours makes it easy to trade in some classification accuracy for a gain in global sparsity.
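One plausible instantiation of the two polyhedral-norm regularizers (an assumption for illustration; the abstract does not pin down which norms are used) penalizes each oblique cut with an l1 norm for local sparsity and each predictor's coefficients across the whole tree with an l-infinity norm for global sparsity:

```python
import numpy as np

def sparsity_penalties(W, lam_local, lam_global):
    """Polyhedral-norm regularizers for the oblique cuts of a tree.

    W is a (branch nodes x predictors) matrix: row t holds the coefficients
    of the oblique cut w_t . x at branch node t.

    Local sparsity : l1 norm of each cut, summed over nodes -- encourages
                     few predictors *within* each cut.
    Global sparsity: l-infinity norm of each predictor's coefficients across
                     the tree, summed over predictors -- a predictor is only
                     "paid for" once, so it is encouraged to vanish from
                     every cut simultaneously.
    """
    local_pen = lam_local * np.abs(W).sum()                 # sum_t ||w_t||_1
    global_pen = lam_global * np.abs(W).max(axis=0).sum()   # sum_j max_t |w_tj|
    return local_pen + global_pen

# Hypothetical coefficients for a depth-2 tree (3 branch nodes, 4 predictors).
W = np.array([[0.8, 0.0, 0.1, 0.0],
              [0.5, 0.0, 0.0, 0.3],
              [0.0, 0.0, 0.2, 0.0]])
print("penalty:", sparsity_penalties(W, lam_local=0.1, lam_global=0.5))
```

In this sketch, predictor 2 contributes nothing to either penalty, so it has been dropped both locally and globally; tuning lam_global trades classification accuracy against the number of predictors used anywhere in the tree.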