Advances in Neural Information Processing Systems 19 2007
DOI: 10.7551/mitpress/7503.003.0027

Similarity by Composition

Abstract: We propose a new approach for measuring similarity between two signals, which is applicable to many machine learning tasks, and to many signal types. We say that a signal S1 is "similar" to a signal S2 if it is "easy" to compose S1 from few large contiguous chunks of S2. Obviously, if we use small enough pieces, then any signal can be composed of any other. Therefore, the larger those pieces are, the more similar S1 is to S2. This induces a local similarity score at every point in the signal, based on th…
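As a rough illustration of the compositional idea in the abstract, here is a toy exact-match sketch for 1-D discrete signals (strings): each position of S1 is scored by the length of the largest contiguous chunk of S1 covering it that also occurs in S2. The paper's actual method uses a probabilistic score over shared regions, so the function below is only an assumption-laden simplification, not the authors' algorithm.

```python
def local_similarity(s1, s2):
    """Toy 'similarity by composition' score for strings: for every position
    of s1, return the length of the longest contiguous chunk of s1 that covers
    that position and also occurs contiguously in s2 (exact match only)."""
    n = len(s1)
    scores = [0] * n
    # Enumerate every contiguous chunk of s1 and check whether it appears in s2.
    for i in range(n):
        for j in range(i + 1, n + 1):
            if s1[i:j] in s2:                 # chunk occurs contiguously in s2
                for k in range(i, j):
                    scores[k] = max(scores[k], j - i)
    return scores

# The shared chunk "bcde" gives those positions a high local score.
print(local_similarity("abcdef", "xbcdey"))   # [0, 4, 4, 4, 4, 0]
```

Larger recoverable chunks yield higher local scores, matching the abstract's intuition that composing S1 from few large pieces of S2 indicates similarity.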

Cited by 33 publications (9 citation statements)
References 15 publications (13 reference statements)
“…The nonnegativity constraint enforces the AE to learn an additive part-based representation of its input data, while the sparsity constraint enforces the average activation of each hidden unit over the entire training data set to be infinitesimal to improve the probability of linear separability [21]. As suggested by Hosseini-Asl et al. [22], imposing the nonnegativity constraint on the AE results in more precise data codes during the greedy layer-wise unsupervised training and improved accuracy after the supervised fine-tuning. Mathematically, the loss function of Equation 3 is extended by the addition of 2 penalty terms to lower the number of negative coefficients and compel sparsity of the NCSAE.…”
Section: Methods (mentioning)
confidence: 97%
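The excerpt above extends an autoencoder loss with two penalty terms: one discouraging negative coefficients and one compelling sparse average hidden activations. Below is a minimal NumPy sketch of such a loss, assuming a squared-error reconstruction term, a quadratic penalty on negative weight entries, and a KL-divergence sparsity penalty toward a small target activation rho; the names ncsae_loss, alpha, beta, and rho and the exact penalty forms are illustrative assumptions, not the cited paper's Equation 3.

```python
import numpy as np

def ncsae_loss(X, W1, b1, W2, b2, alpha=3e-3, beta=3.0, rho=0.05):
    """Sketch of a nonnegativity-constrained sparse autoencoder (NCSAE) loss.
    Hyperparameters and penalty forms are illustrative, not from the cited work."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    H = sigmoid(X @ W1 + b1)            # hidden codes
    X_hat = sigmoid(H @ W2 + b2)        # reconstruction

    # Reconstruction error.
    recon = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))

    # Penalty 1: quadratic cost on negative weight entries (nonnegativity).
    neg_penalty = sum(np.sum(np.minimum(W, 0.0) ** 2) for W in (W1, W2))

    # Penalty 2: KL divergence driving average hidden activations toward rho.
    rho_hat = np.clip(H.mean(axis=0), 1e-6, 1 - 1e-6)
    sparsity = np.sum(rho * np.log(rho / rho_hat)
                      + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

    return recon + alpha * neg_penalty + beta * sparsity

# Toy usage with random data and weights.
rng = np.random.default_rng(0)
X = rng.random((32, 20))
W1, b1 = 0.1 * rng.standard_normal((20, 8)), np.zeros(8)
W2, b2 = 0.1 * rng.standard_normal((8, 20)), np.zeros(20)
print(ncsae_loss(X, W1, b1, W2, b2))
```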
“…For this, we used the Gabor receptive field tuning experiments and computed the mean spike count of each neuron during the full 250 ms presentation of each of the 81 Gabor locations, averaged over the 45 repetitions of each location. This generates a (pre-encoding) 81-dimensional receptive field response vector for each neuron i, which is then encoded into a D-dimensional HV via (Hernández-Cano et al., 2021; Rahimi and Recht, 2007) where B is an 81-by-D random matrix with i.i.d. standard normal elements, is a random vector with i.i.d.…”
Section: Methods (mentioning)
confidence: 99%
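The excerpt describes projecting an 81-dimensional response vector into a D-dimensional hypervector with an 81-by-D random matrix B of i.i.d. standard normal entries; the encoding equation itself did not survive extraction. The sketch below assumes a Rahimi and Recht (2007) style random Fourier feature map, cos(Bᵀx + b), with a uniform random phase vector b; the choice of D, the phase distribution, and the example input are assumptions, and the cited paper's exact map may differ.

```python
import numpy as np

def encode_hv(x, B, b):
    """Assumed random-feature hypervector encoding: cos(B^T x + b)."""
    return np.cos(x @ B + b)

rng = np.random.default_rng(0)
D = 1000                                      # hypervector dimensionality (illustrative)
B = rng.standard_normal((81, D))              # 81-by-D, i.i.d. standard normal
b = rng.uniform(0.0, 2 * np.pi, size=D)       # random phase vector (assumed uniform)

x = rng.poisson(5.0, size=81).astype(float)   # e.g. an 81-dim mean spike-count vector
hv = encode_hv(x, B, b)                       # D-dimensional hypervector
print(hv.shape)
```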
“…For example, it can be applied to intuitive physics when h is a complete object trajectory and s is the initial movement of an object (e.g., Battaglia et al, 2013; Hamrick et al, 2015; Sanborn et al, 2013), language production when h is the next word in a sentence and s are the preceding words (e.g., Chater & Manning, 2006; Levy et al, 2008), and common-sense reasoning about other minds when h is a social goal of other agents and s is a sequence of actions performed by those agents (e.g., Baker et al, 2008, 2009). Similarly, Bayesian models have also been successfully implemented in explaining effects in other areas of psychology such as vision (e.g., Yuille & Kersten, 2006), motor control (Körding & Wolpert, 2004), causal reasoning (e.g., Abbott & Griffiths, 2011; Bramley et al, 2017), reading (Norris, 2006), and learning (e.g., Courville & Daw, 2007; Gershman et al, 2010).…”
Section: The Autocorrelated Bayesian Sampler (mentioning)
confidence: 99%
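Each of these applications reduces to computing a posterior over hypotheses h given observed data s. As a reminder of that underlying computation, here is a minimal discrete sketch of Bayes' rule, P(h | s) ∝ P(s | h) P(h); the three-hypothesis example and all numbers are purely illustrative.

```python
import numpy as np

def posterior(prior, likelihood):
    """Discrete Bayes' rule: P(h | s) proportional to P(s | h) * P(h)."""
    unnormalized = prior * likelihood
    return unnormalized / unnormalized.sum()

# Hypothetical example: three candidate trajectories h given an initial movement s.
prior = np.array([0.5, 0.3, 0.2])        # P(h)
likelihood = np.array([0.1, 0.6, 0.3])   # P(s | h)
print(posterior(prior, likelihood))      # posterior P(h | s)
```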