Abstract. The convergence of direct search methods for unconstrained minimization is examined in the case where the underlying method can be interpreted as a grid or pattern search over successively refined meshes. An important aspect of the main convergence result is that translation, rotation, scaling, and shearing of the successive grids are allowed.

Key words. Grid-based optimization, derivative-free optimization, positive basis methods, convergence analysis, multidirectional search.

Most of the current derivative-free algorithms for which convergence results have been established belong to one or more of three categories: line search methods, trust region methods, or grid-based methods. In this paper, the convergence of derivative-free methods for unconstrained minimization is examined in the case where the underlying method can be interpreted as a grid or pattern search over successively refined meshes. The methods discussed here are therefore similar to those studied in [6], [8], [9], but permit greater freedom in the orientation and scaling of successive grids. Alternative approaches based on trust regions or line searches can be found in [1], [7], and the references therein.

The properties of grid-based methods are explored, and it is shown that convergence can be achieved for a quite general class of algorithms. An important aspect of the main convergence result is that successive grids may be arbitrarily translated, rotated, and sheared relative to one another, and each grid axis may be re-scaled independently of the others. This flexibility allows second-order information to be incorporated into the shape of successive grids, for example by aligning grid axes along conjugate directions or the principal axes of an approximating quadratic. The hope is to construct non-derivative algorithms that possess useful properties of conjugate direction or quasi-Newton algorithms, thus exploiting curvature information without assuming the existence of second derivatives or the availability of first derivatives.

We present two frameworks for the unconstrained minimization of continuously differentiable functions that are bounded below. For the first framework, in which finite searches are conducted along grid directions of descent, we establish convergence of a subsequence of iterates to a stationary point of the objective function. For the second framework, under the stronger assumption that the algorithm searches the grid direction of locally greatest descent at every iterate, we show that the entire sequence of iterates converges to a stationary point.

The restrictions on the grids in our framework are much less severe than for the pattern search methods of [6], [9], where a single set of grid axes is used, only rational scalings of grids are permitted, and arbitrary translations are not allowed.
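To make the grid-search idea concrete, the following is a minimal Python sketch of a pattern search over successively refined meshes. The function name and parameters are illustrative assumptions, and the grid transform G is held fixed here for brevity; the framework described above additionally allows the grid to be translated, rotated, re-scaled, and sheared between iterations.

```python
import numpy as np

def grid_search(f, x0, h0=1.0, h_min=1e-8, G=None):
    """Sketch of a grid/pattern search: try each +/- grid axis for
    descent; refine the mesh when no grid direction of descent exists.
    G (columns = grid axes) is a placeholder for the translated,
    rotated, scaled, and sheared grids allowed by the framework."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    h = h0
    G = np.eye(n) if G is None else np.asarray(G, dtype=float)
    fx = f(x)
    while h > h_min:
        improved = False
        for d in np.hstack([G, -G]).T:   # positive basis: +/- each grid axis
            trial = x + h * d
            ft = f(trial)
            if ft < fx:                  # grid direction of descent found
                x, fx, improved = trial, ft, True
                break
        if not improved:
            h *= 0.5                     # no descent on the mesh: refine it
    return x, fx

# usage: minimize a simple quadratic from a remote starting point
xmin, fmin = grid_search(lambda v: (v[0] - 1.0)**2 + 4.0 * v[1]**2, [3.0, 2.0])
```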
Abstract. The Nelder-Mead algorithm (1965) for unconstrained optimization has been used extensively to solve parameter estimation (and other) problems. Despite its age, it is still the method of choice for many practitioners in the fields of statistics, engineering, and the physical and medical sciences because it is easy to code and very easy to use. It belongs to a class of methods that do not require derivatives and that are often claimed to be robust for problems with discontinuities or where the function values are noisy. Recently (1998) it has been shown that the method can fail to converge, or can converge to non-solutions, on certain classes of problems. Only very limited convergence results exist, for a restricted class of problems in one or two dimensions. In this paper, a provably convergent variant of the Nelder-Mead simplex method is presented and analysed. Numerical results are included to show that the modified algorithm is effective in practice.
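For reference, below is a compact Python sketch of the classical 1965 simplex method that the paper modifies, using the textbook reflection, expansion, contraction, and shrink steps (inside contraction only, for brevity). The coefficients and names are the standard illustrative choices, not the paper's convergent variant, which adds safeguards not shown here.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500):
    """Textbook Nelder-Mead simplex method (1965), simplified:
    reflection, expansion, inside contraction, and shrink steps."""
    n = len(x0)
    # initial simplex: x0 plus a step along each coordinate axis
    simplex = [np.asarray(x0, dtype=float)]
    simplex += [simplex[0] + step * np.eye(n)[i] for i in range(n)]
    fvals = [f(x) for x in simplex]
    for _ in range(max_iter):
        order = np.argsort(fvals)                         # sort best -> worst
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:                    # simplex is "flat"
            break
        centroid = np.mean(simplex[:-1], axis=0)          # all but the worst
        xr = centroid + (centroid - simplex[-1])          # reflection
        fr = f(xr)
        if fr < fvals[0]:
            xe = centroid + 2.0 * (centroid - simplex[-1])  # expansion
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr               # accept reflection
        else:
            xc = centroid + 0.5 * (simplex[-1] - centroid)  # inside contraction
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                         # shrink toward best
                simplex = [simplex[0]] + [simplex[0] + 0.5 * (x - simplex[0])
                                          for x in simplex[1:]]
                fvals = [fvals[0]] + [f(x) for x in simplex[1:]]
    best = int(np.argmin(fvals))
    return simplex[best], fvals[best]

# usage: minimize the Rosenbrock function
x, fx = nelder_mead(lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2,
                    [-1.2, 1.0])
```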
Decision trees are a popular technique in statistical data classification. They recursively partition the feature space into disjoint sub-regions until each sub-region becomes homogeneous with respect to a particular class. The basic Classification and Regression Tree (CART) algorithm partitions the feature space using axis-parallel splits. When the true decision boundaries are not aligned with the feature axes, this approach can produce a complicated boundary structure. Oblique decision trees use oblique decision boundaries to potentially simplify the boundary structure. The major limitation of this approach is that the tree induction algorithm is computationally expensive. In this article we present a new decision tree algorithm, called HHCART. The method utilizes a series of Householder matrices to reflect the training data at each node during the tree construction. Each reflection is based on the directions of the eigenvectors of each class's covariance matrix. Considering axis-parallel splits in the reflected training data provides an efficient way of finding oblique splits in the unreflected training data. Experimental results show that the accuracy and size of the HHCART trees are comparable with some benchmark methods in the literature. An appealing feature of HHCART is that it can handle both qualitative and quantitative features in the same oblique split.
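The reflection step can be illustrated with a short Python sketch. Given a dominant eigenvector d of one class's covariance matrix, a Householder matrix H = I - 2uuᵀ (with u proportional to e1 - d) maps d onto the first coordinate axis, so an axis-parallel split on the reflected data XH corresponds to an oblique split in the original space. The helper name and the median threshold below are illustrative assumptions, not the paper's impurity-based split search.

```python
import numpy as np

def householder_reflect(X, d):
    """Reflect the rows of X so that direction d maps onto the first
    coordinate axis e1. Axis-parallel splits on the reflected data then
    correspond to oblique splits in the original feature space."""
    n = len(d)
    e1 = np.zeros(n)
    e1[0] = 1.0
    u = e1 - d / np.linalg.norm(d)
    if np.linalg.norm(u) < 1e-12:          # d already aligned with e1
        return X, np.eye(n)
    u /= np.linalg.norm(u)
    H = np.eye(n) - 2.0 * np.outer(u, u)   # Householder matrix: symmetric, orthogonal
    return X @ H, H

# sketch of one HHCART-style node: reflect along the dominant eigenvector
# of one class's covariance matrix, then split on axis 0 of the reflected data
np.random.seed(0)
X = np.random.randn(100, 2) @ np.array([[2.0, 1.0], [0.0, 1.0]])
cov = np.cov(X[:50].T)                     # one class's covariance (illustrative)
eigvals, eigvecs = np.linalg.eigh(cov)
d = eigvecs[:, -1]                         # dominant eigenvector
Xr, H = householder_reflect(X, d)
threshold = np.median(Xr[:, 0])            # axis-parallel split in reflected space
# equivalent oblique split in the original space: (x @ H)[0] <= threshold
```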