1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence
DOI: 10.1109/ijcnn.1998.687237
A support vector machine approach to decision trees

Cited by 139 publications (97 citation statements)
References 4 publications
“…A few decades ago, a basic SVM approach was used to interrogate the WBCD [81] and 97.2% classification accuracy was obtained. The result has proved the basic SVM approach achieved a high accuracy given quality data in BC.…”
Section: SVMs (mentioning)
Confidence: 99%
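The statement above refers to a plain SVM classifier applied to the Wisconsin Breast Cancer Database (WBCD). As a rough illustration only, the sketch below trains a linear SVM on scikit-learn's built-in Wisconsin Diagnostic Breast Cancer data, a related but not identical dataset; the 97.2% figure is the result reported in the cited work [81], not an output of this script.

```python
# Minimal sketch of a "basic SVM" breast-cancer classifier, in the spirit of the
# citation above. Uses scikit-learn's Wisconsin Diagnostic Breast Cancer data as
# a stand-in for the WBCD; hyperparameters are illustrative, not tuned.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Linear-kernel SVM with feature scaling.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))

scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```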
“…Both Bennett and Blue [15], and Madzarov et al. [16] use decision trees to reduce the number of support vectors, in an effort to reduce the computational cost of executing an SVM classifier given the feature values in a multi-class setting.…”
Section: Cost Efficient SVM (mentioning)
Confidence: 99%
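To make that idea concrete, the sketch below shows one simplified way a decision tree can cut the kernel evaluations needed at prediction time: a shallow tree partitions the feature space and a small SVM is trained per leaf, so each test point is scored against only that leaf's support vectors. This is an illustrative simplification under assumed synthetic data, not the specific algorithm of Bennett and Blue [15] or Madzarov et al. [16].

```python
# Simplified tree-routed SVM: a shallow decision tree routes each sample to a
# leaf, and a per-leaf SVM makes the final prediction. Each leaf SVM holds far
# fewer support vectors than one global SVM, so prediction is cheaper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
leaf_ids = tree.apply(X)  # leaf index for every training sample

# Train one small RBF SVM per leaf; skip leaves that are already pure.
leaf_svms = {}
for leaf in np.unique(leaf_ids):
    idx = leaf_ids == leaf
    if len(np.unique(y[idx])) > 1:
        leaf_svms[leaf] = SVC(kernel="rbf", gamma="scale").fit(X[idx], y[idx])

def predict(x):
    """Route a single sample through the tree, then its leaf SVM (if any)."""
    leaf = tree.apply(x.reshape(1, -1))[0]
    if leaf in leaf_svms:
        return leaf_svms[leaf].predict(x.reshape(1, -1))[0]
    # Pure leaf: fall back to the tree's own class label.
    return tree.predict(x.reshape(1, -1))[0]

print(predict(X[0]), y[0])
```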
“…In this paper, we depart from this by allowing different solutions in different regions of the feature space. The only works we know of that use this idea are [28] and [2]. Especially [2] discusses a method that seems similar to the one presented here as it combines decision trees with support vector machines. However, the main motivation in [2] is not to reduce the number of kernel evaluations, and the resulting algorithm is a fairly complex combination of gradient descent and tabu search.…”
Section: Related Work (mentioning)
Confidence: 99%