2013
DOI: 10.1007/s10589-013-9560-9
CARTopt: a random search method for nonsmooth unconstrained optimization

Abstract: A random search algorithm for unconstrained local nonsmooth optimization is described. The algorithm forms a partition on R^n using classification and regression trees (CART) from statistical pattern recognition. The CART partition defines desirable subsets where the objective function f is relatively low, based on previous sampling, from which further samples are drawn directly. Alternating between partition and sampling phases provides an effective method for nonsmooth optimization. The sequence of iterates …
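As a reading aid, here is a minimal sketch of the alternating partition/sampling idea the abstract describes. It is not the published algorithm: the CART partition is replaced by a crude bounding box around the lower half of the samples, and every name and parameter below (cartopt_sketch, n_samples, radius, ...) is invented for illustration.

```python
import numpy as np

def cartopt_sketch(f, x0, n_samples=20, n_iters=50, radius=1.0, rng=None):
    """Illustrative stand-in for the partition/sampling loop; the real
    method grows a CART tree over 'low'/'high' labelled samples."""
    rng = np.random.default_rng() if rng is None else rng
    dim = len(x0)
    # Initial sampling phase: uniform cloud around the starting point.
    X = x0 + radius * rng.uniform(-1.0, 1.0, size=(n_samples, dim))
    fX = np.array([f(x) for x in X])
    for _ in range(n_iters):
        # 'Partition' phase: points with f below the median are 'low';
        # here the low region is summarised by its bounding box only.
        low = X[fX <= np.median(fX)]
        lo, hi = low.min(axis=0), low.max(axis=0)
        # Sampling phase: draw the next batch directly from the low region.
        X_new = rng.uniform(lo, hi, size=(n_samples, dim))
        fX_new = np.array([f(x) for x in X_new])
        X, fX = np.vstack([X, X_new]), np.concatenate([fX, fX_new])
    best = int(np.argmin(fX))
    return X[best], fX[best]
```

For instance, cartopt_sketch(lambda x: abs(x[0]) + abs(x[1]), np.array([3.0, -2.0])) drives the sample cloud toward the nonsmooth minimiser at the origin.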

Cited by 16 publications (17 citation statements) · References 18 publications
“…We extend the oblique DT method used in the CARTopt optimisation algorithm of [17] in a number of ways to de-… First, we explain the basic concept of our algorithm for a two-class classification problem. The algorithm easily generalises to the multi-class problem.…”
Section: Methods
confidence: 99%
“…Axis parallel splits can then be searched in the reflected feature space to find the best split. This split will be oblique in the original feature space [17].…”
Section: Methods
confidence: 99%
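The equivalence quoted above can be stated concretely. In the sketch below (all names assumed; any symmetric orthogonal reflection matrix H works), a threshold on coordinate j of the reflected point H @ x is exactly a half-space test with normal H[j] in the original coordinates, i.e. an oblique split.

```python
import numpy as np

def oblique_split_from_reflection(H, j, c):
    """An axis-parallel test on reflected data, (H @ x)[j] <= c, equals
    the oblique half-space test w @ x <= c in the original space, where
    w = H[j] (rows of the reflection matrix define the split normals)."""
    w = H[j]
    return lambda x: w @ x <= c

# Tiny check with a random Householder reflection:
rng = np.random.default_rng(0)
v = rng.normal(size=4)
H = np.eye(4) - 2.0 * np.outer(v, v) / (v @ v)
x = rng.normal(size=4)
split = oblique_split_from_reflection(H, j=2, c=0.5)
assert split(x) == ((H @ x)[2] <= 0.5)
```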
“…Consider splitting a node t containing p quantitative features and C classes. Each reflected feature space at node t is defined using a Householder matrix (Robertson, Price & Reale 2013):

$$H = I - \frac{2\,(\mathbf{e} - \mathbf{d}_{ik})(\mathbf{e} - \mathbf{d}_{ik})^{\mathsf{T}}}{\lVert \mathbf{e} - \mathbf{d}_{ik} \rVert^{2}},$$

where $\mathbf{e}$ is the first column of the p-dimensional identity matrix I and $\mathbf{d}_{ik}$ is the i-th unit-scaled eigenvector of the estimated covariance matrix of class k examples at node t. Each feature vector x (column vector) at node t is reflected using Hx and the best axis-parallel split in the reflected training examples is found.…”
Section: Related Decision Trees
confidence: 99%
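The quoted construction transcribes almost line for line into numpy. The snippet below is only a sketch of the quoted formula: the class-k sample matrix Xk, the choice of the leading eigenvector, and all variable names are assumptions made for illustration.

```python
import numpy as np

def householder_from_eigvec(d):
    """H = I - 2 (e - d)(e - d)^T / ||e - d||^2, with e the first column
    of the p-dimensional identity and d a unit-scaled eigenvector."""
    p = d.shape[0]
    e = np.zeros(p); e[0] = 1.0
    v = e - d
    if v @ v < 1e-12:        # d is already (numerically) equal to e
        return np.eye(p)
    return np.eye(p) - 2.0 * np.outer(v, v) / (v @ v)

# Assumed example data: Xk holds the class-k examples at node t.
rng = np.random.default_rng(0)
Xk = rng.normal(size=(50, 3))
eigvals, eigvecs = np.linalg.eigh(np.cov(Xk, rowvar=False))
d_ik = eigvecs[:, -1]        # unit eigenvector (here: largest eigenvalue)
H = householder_from_eigvec(d_ik)
Z = Xk @ H                   # H is symmetric, so this reflects each row x as Hx
```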
“…Each feature vector x (column vector) at node t is reflected using Hx and the best axis-parallel split in the reflected training examples is found. Reflecting the feature vectors in this way makes d_ik parallel to e and provides a simple and effective way to find oblique splits (Robertson, Price & Reale 2013; Wickramarachchi et al.).…”
Section: Related Decision Trees
confidence: 99%
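The "parallel to e" property in this statement can be checked numerically. The snippet below is a self-contained sanity check (all names assumed), exploiting that the Householder matrix built from a unit vector d maps d exactly onto e.

```python
import numpy as np

# Reflecting the unit vector d with its own Householder matrix H sends d
# onto e, i.e. onto the first coordinate axis.
rng = np.random.default_rng(1)
d = rng.normal(size=5); d /= np.linalg.norm(d)   # any unit vector
e = np.zeros(5); e[0] = 1.0
v = e - d
H = np.eye(5) - 2.0 * np.outer(v, v) / (v @ v)
assert np.allclose(H @ d, e)                     # H d = e exactly
```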
“…To potentially simplify the CART partition, Robertson et al. (2013a) reflect the training data using a Householder matrix…”
Section: Introduction
confidence: 99%