2022
DOI: 10.1109/tpami.2020.3020738
Building and Interpreting Deep Similarity Models

Abstract: Many learning algorithms such as kernel machines, nearest neighbors, clustering, or anomaly detection, are based on distances or similarities. Before similarities are used for training an actual machine learning model, we would like to verify that they are bound to meaningful patterns in the data. In this paper, we propose to make similarities interpretable by augmenting them with an explanation. We develop BiLRP, a scalable and theoretically founded method to systematically decompose the output of an already …

Cited by 33 publications (20 citation statements)
References 62 publications
“…Taylor expansions are a well-known mathematical framework to decompose a function into a series of terms associated with different degrees and combinations of input variables. Unlike Shapley values that evaluate the function f(x) multiple times, the Taylor expansion framework for explaining an ML model [13], [19], [42] evaluates the function once at some reference point x̃ and assigns feature contributions by locally extracting the gradient (and higher order derivatives). Specifically, the Taylor expansion of some smooth and differentiable function f : ℝ^d → ℝ at some reference point x̃ is given by

f(x) = f(x̃) + ∇f(x̃)^⊤(x − x̃) + ½(x − x̃)^⊤∇²f(x̃)(x − x̃) + …

where ∇f and ∇²f denote the gradient and the Hessian, respectively, and …”

Section: B. Taylor Decomposition
confidence: 99%
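The first-order term of the expansion quoted above already yields per-feature contributions: each input feature i receives R_i = ∂f/∂x_i(x̃) · (x_i − x̃_i). A minimal sketch, using a hypothetical toy function f (not one from the cited papers), assuming an analytic gradient is available:

```python
import numpy as np

def f(x):
    # Hypothetical smooth test function: f(x) = x0^2 + 3*x1.
    return x[0] ** 2 + 3.0 * x[1]

def grad_f(x):
    # Analytic gradient of f: [2*x0, 3].
    return np.array([2.0 * x[0], 3.0])

x_ref = np.array([1.0, 1.0])   # reference point x-tilde
x = np.array([1.5, 2.0])       # point to be explained

# First-order Taylor contributions: R_i = (df/dx_i at x_ref) * (x_i - x_ref_i)
R = grad_f(x_ref) * (x - x_ref)
print(R.tolist())  # → [1.0, 3.0]

# f(x_ref) + sum of contributions approximates f(x), up to the
# second-order (Hessian) term that the first-order expansion drops.
print(f(x_ref) + R.sum(), f(x))  # → 8.0 8.25
```

The residual 0.25 here is exactly the neglected quadratic term ½·2·(0.5)² of the expansion, illustrating why higher-order terms matter for curved functions.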
“…For illustration, we present the BiLRP method [42], which assumes that we have a similarity model built as a dot product on some hidden…”
Section: A. Explaining Beyond Heatmaps
confidence: 99%
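The quoted passage describes similarity as a dot product on hidden features. For a purely linear feature map φ(x) = Wx (a toy stand-in for the deep network, not the BiLRP implementation itself), the similarity ⟨φ(x), φ(x′)⟩ decomposes exactly into pairwise contributions R_{ij} = Σ_m W_{mi} x_i W_{mj} x′_j, and these sum back to the similarity score (conservation). A minimal sketch of that decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear feature map phi(x) = W x; W, x, x2 are arbitrary test data.
W = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
x2 = rng.standard_normal(3)

# Similarity modeled as a dot product on the hidden features.
similarity = (W @ x) @ (W @ x2)

# Pairwise decomposition: R[i, j] is the joint contribution of input
# feature i of x and input feature j of x2 to the similarity score.
R = np.einsum('mi,i,mj,j->ij', W, x, W, x2)

# Conservation: the pairwise relevances sum to the similarity itself.
print(np.allclose(R.sum(), similarity))  # → True
```

For a deep nonlinear φ the exact product structure no longer holds, which is where the layer-wise propagation rules of the actual method come in; the linear case above only illustrates the kind of second-order, pairwise attribution being computed.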
“…The next step in our analysis will be to consider how socioeconomic factors that determined the production and distribution of those treatises may have altered the path of knowledge transmission. In addition to this contextual data, we will also use machine learning techniques to expand the dataset to include other "knowledge atoms," specifically scientific images and computational astronomic tables extracted from the same textbooks 41 .…”
Section: Same Original Part Layer (SAORP) Plays a Fundamental Role In …
confidence: 99%