2016
DOI: 10.1016/j.neucom.2015.12.109
Learning matrix quantization and relevance learning based on Schatten-p-norms

Cited by 6 publications (5 citation statements)
References 25 publications
“…This concept allows one to think about new processing steps inside and between capsules and hence to go beyond vectorial operations. This is, however, not a new concept in machine learning, but rather a well-established property of PBs: a simple example is to use a matrix as the input to a VQ network; the prototypes are then matrices as well, and hence a respective matrix dissimilarity has to be chosen [88]. However, using more complex dissimilarity measures, it is also possible to model prototypes as affine subspaces of the input space [89,90].…”
Section: Arbitrary Input Dimensions and Structured Data For NNs And V... (mentioning)
confidence: 99%
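The statement above describes matrix-valued inputs and prototypes that require a matrix dissimilarity, which is the setting of the cited paper (Schatten-p-norms). A minimal sketch of such a dissimilarity and a nearest-prototype lookup follows; the function names `schatten_p_norm` and `nearest_prototype` are illustrative choices, not the paper's implementation, and the norm is computed via singular values using NumPy.

```python
import numpy as np

def schatten_p_norm(A, p=2):
    """Schatten-p-norm of A: the l_p norm of its singular values."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

def nearest_prototype(X, prototypes, p=2):
    """Index of the prototype matrix closest to X under the
    Schatten-p dissimilarity d(X, W) = ||X - W||_p."""
    dists = [schatten_p_norm(X - W, p) for W in prototypes]
    return int(np.argmin(dists))

# Toy usage: two 2x2 prototype matrices and one matrix input.
prototypes = [np.eye(2), np.zeros((2, 2))]
X = np.array([[0.9, 0.0],
              [0.0, 1.1]])
print(nearest_prototype(X, prototypes, p=1))  # -> 0 (closer to the identity)
```

For p=2 this reduces to the Frobenius norm, so a vectorial LVQ with flattened matrices is recovered as a special case; other values of p weight large and small singular values differently.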
“…21 However, the LVQ neural network cannot easily achieve both ideal recognition accuracy and low computational complexity when the sample space is relatively large. 24 Therefore, an I-LVQ neural network is considered and applied to recognize the driving cycle online.…”
Section: DCR Model Based On The I-LVQ Neural Network (mentioning)
confidence: 99%
“…Hofmann et al. (2015) propose kernel robust soft LVQ (RSLVQ), which is capable of classifying complex data sets [23]. They describe a general Gram matrix, including low-rank approximations, and how the models could be implemented in this approach.…”
Section: Some Variants Of Learning Vector Quantization (mentioning)
confidence: 99%
“…They describe a general Gram matrix, including low-rank approximations, and how the models could be implemented in this approach. Bohnsack et al. (2016) improve the LVQ method in order to classify matrix data based on matrix norms [24]. While learning algorithms are generally based on a vectorial approach, the contribution of that article is to work with matrix norms.…”
Section: Some Variants Of Learning Vector Quantization (mentioning)
confidence: 99%