2017 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2017.7965902

Dictionary learning with equiprobable matching pursuit

Abstract: Sparse signal representations based on linear combinations of learned atoms have been used to obtain state-of-the-art results in several practical signal processing applications. Approximation methods are needed to process high-dimensional signals in this way because the problem of calculating optimal atoms for sparse coding is NP-hard. Here we study greedy algorithms for unsupervised learning of dictionaries of shift-invariant atoms and propose a new method where each atom is selected with the same proba…
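The abstract describes greedy sparse coding over a learned dictionary. As background, the following minimal NumPy sketch shows plain matching pursuit, the baseline such greedy methods build on. It is an illustration with assumed names and shapes, not the paper's method; the paper's equiprobable variant modifies the selection step so that each atom ends up being selected with the same probability.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy sparse coding sketch: repeatedly pick the atom most
    correlated with the residual and subtract its projection.

    dictionary: array of shape (n_features, n_dict_atoms)
    with unit-norm columns (an assumption of this sketch).
    """
    residual = signal.copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))   # greedy selection step
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual
```

The selection step (the argmax) is the part a homeostatic or equiprobable scheme would modify; the update of the residual is common to the whole family of pursuit algorithms.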

Cited by 5 publications (5 citation statements)
References 37 publications (50 reference statements)
“…This shows that although slightly outperforms (which itself is more efficient than , see Figure 2B), proves to be closer to the optimal solution given by . Moreover, we replicated the result of [23] that while homeostasis was essential in improving unsupervised learning, the coding algorithm (MP vs. OMP) mattered relatively little (see Annex ()). Also, we verified the dependence of this efficiency with respect to different hyperparameters (as we did in Figure 2B).…”
Section: Unsupervised Learning and The Optimal Representation Of I… (citation type: supporting)
Confidence: 54%
“…Second, we will propose a simplification of this homeostasis algorithm based on the activation probability of each neuron, thanks to control of the slope of its corresponding Rectified Linear Unit (ReLU). We show that it yields quantitative results similar to the full homeostasis algorithm and that it converges more rapidly than classical methods [10,23]. We designed our computational architecture to be able to quantitatively cross-validate every single hyperparameter.…”
Section: Introduction: Reconciling Competition and Cooperation (citation type: mentioning)
Confidence: 92%
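The excerpt above describes homeostasis that equalizes neurons' activation probabilities by controlling a per-unit gain (the ReLU slope). The sketch below is an illustrative guess at that mechanism applied to the pursuit selection step: the function names, learning rate, and multiplicative update rule are all assumptions of this sketch, not the cited papers' exact algorithm.

```python
import numpy as np

def select_atom(correlations, gains):
    """Pursuit selection with homeostatic modulation: each atom's
    correlation is scaled by a per-atom gain, so atoms that have
    fired too often win the competition less frequently."""
    return int(np.argmax(gains * np.abs(correlations)))

def update_gains(gains, selection_counts, n_codings, lr=0.02):
    """Nudge gains so every atom is selected with equal probability."""
    target = 1.0 / gains.size                     # equiprobable target rate
    freq = selection_counts / max(n_codings, 1)   # empirical selection rate
    return gains * np.exp(-lr * (freq - target) / target)
```

The key point the excerpt makes is that this probability-based control is cheaper than a full homeostasis algorithm while producing similar results.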
“…However, it needs to be noted that the processing time per signal sample decreases logarithmically with increasing signal length [51]. The sparsity level hyperparameter defines the number of atomic events generated for the sparse representation, where a lower sparsity level yields fewer atomic events. Therefore, the computational cost at a 5% sparsity level is half that at a 10% sparsity level.…”
Section: Results (citation type: mentioning)
Confidence: 99%
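The cost comparison in this excerpt is direct proportionality: each atomic event corresponds to one greedy selection pass, so halving the sparsity level halves the selection work. A tiny illustration, with assumed variable names and an assumed example length:

```python
# Illustrative arithmetic only: the number of atomic events, and hence
# the pursuit selection work, scales linearly with the sparsity level.
signal_length = 10_000                          # assumed example length
for sparsity_level in (0.05, 0.10):
    n_atomic_events = int(sparsity_level * signal_length)
    print(f"{sparsity_level:.0%} sparsity -> {n_atomic_events} atomic events")
```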
“…Matching Pursuit (MP) is a sparse approximation algorithm which greedily finds the "best matching" atoms [13], [19]. An extension of MP is orthogonal MP (OMP) [15], [20], which is applicable to high-dimensional signals.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
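To make the MP/OMP distinction in this excerpt concrete, here is a minimal NumPy sketch of OMP under the same assumed dictionary layout as the MP sketch above; it is a generic illustration, not any cited paper's implementation. The difference from plain MP is that after every selection the coefficients of all chosen atoms are re-fit by least squares, which keeps the residual orthogonal to the span of the selected atoms.

```python
import numpy as np

def orthogonal_matching_pursuit(signal, dictionary, n_atoms):
    """Generic OMP sketch: select the atom most correlated with the
    residual, then re-fit all selected coefficients by least squares."""
    residual = signal.copy()
    support = []                                # indices of chosen atoms
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(dictionary.T @ residual)))
        if k not in support:
            support.append(k)
        sub = dictionary[:, support]
        sol, *_ = np.linalg.lstsq(sub, signal, rcond=None)
        residual = signal - sub @ sol
    coeffs[support] = sol
    return coeffs
```

The least-squares re-fit is what makes OMP more expensive per iteration than MP but typically more accurate for a given number of selected atoms.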