2013
DOI: 10.1016/j.patrec.2013.02.011
Decentralized Estimation using distortion sensitive learning vector quantization

Cited by 2 publications (3 citation statements)
References 20 publications
“…To combine the class label information with NMF, we introduce the following matrix  so that  =  (5) so that if  and  belong to the same class, their representations  and  will be the same, as shown in Figure 1. Based on the new representation of  and the squared Euclidean distance loss function, our NMF-CLI algorithm with the class label information reduces to the following optimization problem:…”
Section: NMF Using Class Label Information
confidence: 99%
“…Matrix factorization techniques have attracted attention as effective methods for data representation [2]. Various representation methods have been introduced using different formulations, including Singular Value Decomposition [3], Principal Component Analysis [4], and Vector Quantization [5]. Matrix factorization means seeking two matrix factors such that their product approximates the original matrix [6].…”
Section: Introduction
confidence: 99%
“…In contrast to the black-box property of SVM and its semi-supervised variants, prototype-based methods enjoy wide popularity in various application domains (Grbovic and Vucetic, 2013; Ortiz-Bayliss et al., 2013) due to their intuitive and simple behavior: they represent their decision in terms of typical representatives (referred to as prototypes) in the input space, and classification is based on the distances of data to prototypes. Prototypes can be directly inspected by domain experts in the field in the same way as data points.…”
Section: Introduction
confidence: 99%