2015
DOI: 10.1016/j.neucom.2014.12.051
Kernel Low-Rank Representation for face recognition

Cited by 42 publications (14 citation statements)
References 52 publications
“…To make the low-rank model effectively deal with the nonlinear structure of data, [11] proposed the kernel low-rank representation (KLRR) graph for semisupervised classification by using the kernel trick. As a nonlinear extension of LRR, KLRR also showed excellent performance in face recognition [21].…”
Section: Introduction
confidence: 99%
“…For any two points xᵢ and xⱼ, we use a kernel function κ(xᵢ, xⱼ) = ⟨Φ(xᵢ), Φ(xⱼ)⟩ to map the data into a kernel feature space. Some commonly used kernels include the Gaussian radial basis function (RBF) kernel κ(x, y) = exp(−‖x − y‖²/(2σ²)), the polynomial kernel κ(x, y) = (c + ⟨x, y⟩)^d, and the sigmoid kernel κ(x, y) = tanh(a⟨x, y⟩ + c) [2,31].…”
Section: Kernel SDA
confidence: 99%
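The three kernels quoted above can be sketched directly. This is a minimal illustration, not code from the cited paper; the parameter names (sigma, c, d, a) are the conventional ones assumed here, since the extracted text dropped them:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian RBF: kappa(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def polynomial_kernel(x, y, c=1.0, d=2):
    # Polynomial: kappa(x, y) = (c + <x, y>)^d
    return (c + np.dot(x, y)) ** d

def sigmoid_kernel(x, y, a=1.0, c=0.0):
    # Sigmoid: kappa(x, y) = tanh(a * <x, y> + c)
    return np.tanh(a * np.dot(x, y) + c)
```

Each function returns the scalar κ(x, y) for a single pair of points; stacking these values over all pairs of a dataset gives the Gram matrix used by kernel methods such as KLRR.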
“…By applying the dimensionality reduction to F, (9) can be modified as follows:

min_c ‖Pᵀφ(x) − PᵀΦ(B)c‖² + λ₁‖d ⊙ c‖² + λ₂‖c‖₀

where P = {p₁, p₂, …, p_s} ∈ ℝ^{D×s} is the transformation matrix of F. As mentioned in [39], the transformation matrix is related to the images, so dot products of images can be replaced by the kernel. By applying the representation of the transformation matrix in a kernel-based dimensionality reduction method (KPCA), each projection vector is a linear combination of the images in F:

pᵢ = ∑_{j=1}^{m} α_{ij} φ(bⱼ)

where αᵢ = [α_{i1}, α_{i2}, …, α_{im}]ᵀ is called the pseudo-transformation vector corresponding to the i-th transformation vector.…”
Section: Kernel Locality-constrained Sparse Coding
confidence: 99%
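The kernel trick in the statement above means the projection Pᵀφ(x) never requires φ explicitly: since pᵢ = ∑ⱼ α_{ij} φ(bⱼ), the i-th coordinate is ⟨pᵢ, φ(x)⟩ = ∑ⱼ α_{ij} κ(bⱼ, x). A minimal sketch under that assumption (the names `project`, `B`, `A` are illustrative, not from the cited paper):

```python
import numpy as np

def rbf(u, v, sigma=1.0):
    # Gaussian RBF kernel; any Mercer kernel could be substituted
    return np.exp(-np.linalg.norm(u - v) ** 2 / (2 * sigma ** 2))

def project(x, B, A, kernel=rbf):
    """Compute P^T phi(x) using only kernel evaluations.

    B : (m, D) array whose rows are the images b_1, ..., b_m
    A : (s, m) array whose rows are the pseudo-transformation
        vectors alpha_i, so that p_i = sum_j alpha_ij * phi(b_j)
    """
    k_x = np.array([kernel(b, x) for b in B])  # k_x[j] = kappa(b_j, x)
    return A @ k_x                             # s-dimensional projection P^T phi(x)
```

With a linear kernel κ(u, v) = ⟨u, v⟩ this reduces to the ordinary projection (AB)x, which is a quick way to sanity-check the identity.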