2019
DOI: 10.1016/j.neucom.2018.11.094

Boolean kernels for rule based interpretation of support vector machines

Cited by 10 publications (7 citation statements)
References 18 publications
“…Besides its theoretical value, this family of kernels greatly improves the interpretability of Boolean kernels, as shown in [9], and achieves state-of-the-art performance on both classification tasks [26] and top-N item recommendation tasks [27].…”
Section: Boolean Kernels for Categorical Data
confidence: 98%
“…In other words, these kernels compute the number of logical rules of a fixed form over the input variables that are satisfied by both input vectors. To show that Boolean kernels can greatly improve the interpretability of SVMs, a proof-of-concept method based on a genetic algorithm was proposed in [9]. This algorithm extracts from the hypothesis of an SVM the features (of the feature space) that are most influential in the decision.…”
Section: Introduction
confidence: 99%
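To make the statement above concrete, here is a minimal sketch (not code from the cited papers) of a monotone conjunctive Boolean kernel: on binary vectors, the number of AND-rules over d distinct variables satisfied by both inputs equals a binomial coefficient of their dot product, since any d commonly true variables form such a rule. The function name and interface are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a degree-d monotone conjunctive
# Boolean kernel: for binary vectors x and z, the number of conjunctions of d
# distinct variables that are true in both x and z is C(<x, z>, d).
from math import comb

import numpy as np


def conjunctive_kernel(X, Z, d=2):
    """Gram matrix of the degree-d monotone conjunctive Boolean kernel.

    X, Z are {0,1} matrices of shape (n_samples, n_features).
    Entry (i, j) counts the d-variable AND-rules satisfied by both X[i] and Z[j].
    """
    dots = X @ Z.T  # number of variables that are true in both vectors
    return np.vectorize(lambda s: comb(int(s), d))(dots).astype(float)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(5, 10))
    K = conjunctive_kernel(X, X, d=2)
    print(K)  # symmetric Gram matrix, usable with an SVM and kernel="precomputed"
```

The resulting Gram matrix can be passed to an SVM as a precomputed kernel; the degree d controls the size, and hence the complexity, of the rules being counted.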
“…Beyond the interpretations of simple algorithms, researchers have spent substantial effort developing model-specific interpretation approaches for complex models that are generally considered not interpretable by nature, such as random forests (47), support vector machines (48), and neural networks (49, 50). Although these model-specific interpretation methods allow an easy understanding of model behavior, their primary limitation is their limited flexibility.…”
Section: (not specified)
confidence: 99%
“…The first trend is rule-based post-hoc explainability, where researchers construct rules from trained SVM models. For example, Polato and Aiolli (2019) introduced a method to obtain explanation rules from a trained SVM model using Boolean kernels, whose feature spaces are made up of logical statements. Moreover, a search strategy was implemented to extract the most important features/rules that efficiently explain the trained model.…”
Section: Support Vector Machine
confidence: 99%
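As an illustration of the rule-extraction idea described in the statement above, the sketch below scores candidate conjunctive rules by how well their satisfaction agrees with a trained SVM's predictions. The exhaustive scoring loop, function names, and agreement criterion are assumptions standing in for the genetic-algorithm search of Polato and Aiolli (2019), not a reproduction of their method.

```python
# Illustrative sketch only: an exhaustive search (standing in for the genetic
# algorithm described above) that scores candidate conjunctive rules by their
# agreement with a trained SVM's predictions. Names and scoring are assumptions.
from itertools import combinations

import numpy as np
from sklearn.svm import SVC


def rule_support(X, rule):
    """Boolean mask of samples satisfying the conjunction of the given feature indices."""
    return np.all(X[:, list(rule)] == 1, axis=1)


def top_rules(svm, X, degree=2, k=5):
    """Return the k conjunctions of `degree` features whose satisfaction best matches the SVM's positive predictions."""
    preds = svm.predict(X)
    scored = []
    for rule in combinations(range(X.shape[1]), degree):
        mask = rule_support(X, rule)
        score = np.mean(mask == (preds == 1))  # agreement with the SVM decision
        scored.append((score, rule))
    return sorted(scored, reverse=True)[:k]


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.integers(0, 2, size=(200, 8))
    y = ((X[:, 0] & X[:, 3]) == 1).astype(int)  # ground-truth rule: x0 AND x3
    svm = SVC(kernel="linear").fit(X, y)
    print(top_rules(svm, X, degree=2, k=3))     # (0, 3) should rank first
```

In a Boolean-kernel setting, the candidate rules correspond directly to coordinates of the kernel's feature space, which is what makes this kind of post-hoc rule extraction from an SVM hypothesis possible.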