2015
DOI: 10.1007/978-3-319-23525-7_9
Opening the Black Box: Revealing Interpretable Sequence Motifs in Kernel-Based Learning Algorithms

Cited by 11 publications (10 citation statements)
References 26 publications
“…From the implicitly learned patterns, predictions, prescriptions, and classifications can be made. Research has attempted to understand black boxes by identifying the internal structures responsible for predictions (Vidovic et al., 2015). This is considered to be a first step towards interpreting implicit patterns.…”
Section: From Patterns To Theory (citation type: mentioning)
confidence: 99%
“…Technically, explanation methods have been applied to a broad range of models ranging from simple bag-of-words-type classifiers or logistic regression [13], [28] to feedforward or recurrent deep neural networks [9], [11], [13], [171] and, more recently, also to unsupervised learning models [85], [86]. At the same time, these methods were able to handle different types of data, including images [13], speech [20], text [10], [38], and structured data, such as molecules [162], [165] or genetic sequences [191].…”
Section: Successful Uses of Explanation (citation type: mentioning)
confidence: 99%
“…Over the last few years much work has been done on "black box" model explanation. Some of this work (Adler et al., 2016; Baehrens et al., 2010; Lou et al., 2013; Montavon et al., 2018; Simonyan et al., 2014; Vidovic et al., 2015) has been aimed specifically at experts. The interpretability of a model is a key element of a robust validation procedure in applications such as medicine or self-driving cars.…”
Section: A Case Study: Building a Music Recommendation System (citation type: mentioning)
confidence: 99%