2008
DOI: 10.1007/978-3-540-75390-2_4
Rule Extraction from Linear Support Vector Machines via Mathematical Programming

Cited by 7 publications (5 citation statements)
References 12 publications
“…It has now proved to be a powerful and promising tool for data classification and function estimation. References [2,3] applied SVMs to data classification and obtained valuable results.…”
Section: Introduction (mentioning)
confidence: 99%
“…Support vector machines (SVMs) [33] are a class of state-of-the-art classification algorithms known to be successful in a wide variety of applications because of their strong robustness, high generalization ability, and low classification error. However, an obvious drawback is that an SVM lacks explanation capability for its results when applied to classification problems; that is, the results obtained from SVM classifiers are not intuitive to humans and are hard to understand [34]. For example, when an unlabeled example is classified by the SVM classifier as positive or negative, the only explanation that can be provided is that the outputs of the variables are lower/higher than some threshold; such an explanation is completely non-intuitive to human experts [34].…”
mentioning
confidence: 99%
“…However, an obvious drawback is that an SVM lacks explanation capability for its results when applied to classification problems; that is, the results obtained from SVM classifiers are not intuitive to humans and are hard to understand [34]. For example, when an unlabeled example is classified by the SVM classifier as positive or negative, the only explanation that can be provided is that the outputs of the variables are lower/higher than some threshold; such an explanation is completely non-intuitive to human experts [34]. Therefore, it is often difficult to extract rules from a trained SVM classifier.…”
mentioning
confidence: 99%
“…There are several works that address this topic (Fu, 1991; Hayashi, 1991; Towell and Shavlik, 1993; Craven and Shavlik, 1994b; Fu, 1994; Sethi and Yoo, 1994; Tan, 1994; Thrun, 1994; Tickle et al., 1994; Alexander and Dietterich, 1995; Setiono and Liu, 1995; Thrun, 1995; Craven, 1996; Milaré et al., 1997; Schellharmmer et al., 1997; Tickle et al., 1998; Martineli, 1999; Schmitz et al., 1999; Duch et al., 2000; Behloul et al., 2002; Zhou and Jiang, 2002; Milaré et al., 2002; Núñez et al., 2002; Fung et al., 2008).…”
Section: Satimage Dataset (mentioning)
confidence: 99%
“…This work, proposed by Fung et al. (2008), describes a method for converting linear SVMs into a set of non-overlapping rules. In this method, each iteration of the rule-extraction algorithm is formulated as a constrained optimization problem.…”
Section: Knowledge Extraction from SVMs (unclassified)
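The statement above describes turning a linear SVM's decision function into human-readable rules. As a much simpler illustration of that underlying idea, the sketch below reads one axis-aligned threshold rule off a linear decision function. This is a minimal, hypothetical heuristic (pick the largest-weight feature and find where the decision flips), not the constrained-optimization procedure of Fung et al. (2008); the weights, bounding box, and helper names are assumptions for illustration only.

```python
# Hedged sketch: deriving one axis-aligned rule from a linear classifier.
# This greedy single-feature heuristic is NOT the Fung et al. (2008) method,
# which solves a constrained optimization problem at each iteration.

def linear_decision(w, b, x):
    """Linear SVM decision value: w . x + b (positive class if >= 0)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def extract_rule(w, b, box):
    """Pick the feature with the largest |weight| and, holding the other
    features at the midpoint of their bounding box [(lo, hi), ...], find
    the threshold where the decision flips sign.
    Returns (feature_index, threshold, direction)."""
    mid = [(lo + hi) / 2 for lo, hi in box]
    j = max(range(len(w)), key=lambda i: abs(w[i]))
    # Solve w_j * t + sum_{i != j} w_i * mid_i + b = 0 for t.
    rest = sum(w[i] * mid[i] for i in range(len(w)) if i != j) + b
    t = -rest / w[j]
    direction = ">=" if w[j] > 0 else "<="
    return j, t, direction

# Hypothetical trained linear SVM over two features scaled to [0, 1].
w, b = [2.0, 0.5], -1.0
box = [(0.0, 1.0), (0.0, 1.0)]
j, t, d = extract_rule(w, b, box)
print(f"rule: x[{j}] {d} {t:.3f} => positive")  # rule: x[0] >= 0.375 => positive
```

Unlike the rule set produced by the paper's method, a single rule like this does not cover the whole input space without overlap; it only shows how a threshold explanation can be recovered from linear weights.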