2005
DOI: 10.1007/10984697_13
Kernel Discriminant Learning with Application to Face Recognition

Abstract: When applied to high-dimensional pattern classification tasks such as face recognition, traditional kernel discriminant analysis methods often suffer from two problems: 1) small training sample size compared to the dimensionality of the sample (or mapped kernel feature) space, and 2) high computational complexity. In this chapter, we introduce a new kernel discriminant learning method, which attempts to deal with the two problems by using regularization and subspace decomposition techniques. The prop…
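The abstract only outlines the approach. As a rough illustration of the regularization idea it mentions, the sketch below implements a generic two-class kernel Fisher discriminant with a ridge term added to the within-class scatter; it is not the chapter's actual algorithm, and `gamma`, `mu`, and all function names are assumptions made here for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and rows of Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def kfd_fit(X, y, gamma=1.0, mu=1e-3):
    """Two-class kernel Fisher discriminant with ridge regularization.

    Returns the expansion coefficients alpha plus the training data,
    so new points can be projected onto the discriminant direction.
    """
    K = rbf_kernel(X, X, gamma)                    # n x n Gram matrix
    n = K.shape[0]
    classes = np.unique(y)
    assert len(classes) == 2, "this sketch handles two classes"
    means, N = [], np.zeros((n, n))
    for c in classes:
        Kc = K[:, y == c]                          # columns of class c
        lc = Kc.shape[1]
        means.append(Kc.mean(axis=1))
        # within-class scatter expressed in the kernel-induced feature space
        N += Kc @ (np.eye(lc) - np.full((lc, lc), 1.0 / lc)) @ Kc.T
    diff = means[0] - means[1]
    # the ridge term handles the singularity caused by small sample size
    alpha = np.linalg.solve(N + mu * np.eye(n), diff)
    return alpha, X, gamma

def kfd_project(model, Xnew):
    alpha, Xtrain, gamma = model
    return rbf_kernel(Xnew, Xtrain, gamma) @ alpha
```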

Cited by 8 publications (9 citation statements)
References 37 publications
“…In other words, we can observe the changes of the margins of the training examples at every boosting iteration, and consider it convergent when the margins of most training examples stop increasing or are increasing slowly. Our experiments indicate that this approach works well in many cases (see [51] for details). However, as mentioned earlier, the margin theory alone is insufficient to explain the behaviors of boosting [40], [45], [46].…”
Section: Some Discussion on the Convergence of Boosting
Confidence: 58%
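The heuristic described in this excerpt can be restated concretely. The sketch below is a generic illustration, not code from [51]: it tracks normalized boosting margins across rounds and declares convergence once the margins of most training examples have stopped growing; `eps` and `frac` are assumed thresholds.

```python
import numpy as np

def normalized_margins(H, alphas, y):
    """Normalized boosting margins y_i * f(x_i) / sum(|alpha|).

    H      : (T, n) matrix of weak-learner outputs in {-1, +1}
    alphas : (T,) weak-learner weights
    y      : (n,) labels in {-1, +1}
    """
    f = alphas @ H                       # combined score per training example
    return y * f / np.sum(np.abs(alphas))

def margins_converged(prev, curr, eps=1e-3, frac=0.9):
    """Heuristic stop rule: convergence is declared when the margins of
    at least `frac` of the training examples grew by less than `eps`
    since the previous boosting iteration."""
    return np.mean((curr - prev) < eps) >= frac
```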
“…It is therefore unrealistic to expect that the heuristic approach can accurately estimate the optimal value of . For example, it is found in our experiments that JD-LDA with the best found often yielded much better cumulative margin distributions than its boosting version [51].…”
Section: Some Discussion on the Convergence of Boosting
Confidence: 72%
“…Training vectors are mapped into a higher-dimensional space by a kernel function that describes a hyperplane, which consists of an optimal separation of the dataset into discrete classes according to the training samples. The four most widely used kernel functions are linear, polynomial, RBF, and sigmoid [77,78]. Table 3 lists the mathematical equations of these kernel functions.…”
Section: Support Vector Machine
Confidence: 99%
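Table 3 of the citing paper is not reproduced in this excerpt. For reference, the commonly used forms of the four kernels it names are shown below in a LIBSVM-style parameterization; `gamma`, `coef0`, and `degree` are assumed hyperparameters, not values taken from that paper.

```python
import numpy as np

def linear_kernel(x, z):
    # K(x, z) = x . z
    return x @ z

def polynomial_kernel(x, z, gamma=1.0, coef0=1.0, degree=3):
    # K(x, z) = (gamma * x . z + coef0) ** degree
    return (gamma * (x @ z) + coef0) ** degree

def rbf_kernel(x, z, gamma=1.0):
    # K(x, z) = exp(-gamma * ||x - z||^2)
    return np.exp(-gamma * np.sum((x - z) ** 2))

def sigmoid_kernel(x, z, gamma=1.0, coef0=0.0):
    # K(x, z) = tanh(gamma * x . z + coef0)
    return np.tanh(gamma * (x @ z) + coef0)
```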
“…The advantage of the QCQP is that it is faster than SDP and computationally affordable for moderately sized problems. Discriminant analysis provides an effective criterion to assess the feature space and consequently has been studied in various approaches to learn the kernel [15,55,46,89,94]. In [83], the authors proposed an optimization approach that maximizes the linear discriminant analysis's objective in the feature space, which leads to finding the parameter of the Gaussian kernel.…”
Section: Feature Space Conditions
Confidence: 99%
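As a rough illustration of the idea attributed to [83], the sketch below scores candidate Gaussian-kernel widths by a Fisher-style ratio computed entirely from the Gram matrix and keeps the best one. This is a grid-search stand-in, not the QCQP or optimization formulation used in that work; `select_gamma`, the grid, and the two-class restriction are all assumptions made here.

```python
import numpy as np

def rbf_gram(X, gamma):
    # Gram matrix of the Gaussian kernel over the training set.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def fisher_criterion(K, y):
    """Between-class over within-class scatter, measured through the
    Gram matrix (two-class case)."""
    c0, c1 = (y == np.unique(y)[0]), (y == np.unique(y)[1])
    K00, K11, K01 = K[np.ix_(c0, c0)], K[np.ix_(c1, c1)], K[np.ix_(c0, c1)]
    between = K00.mean() + K11.mean() - 2 * K01.mean()   # ||m0 - m1||^2 in feature space
    within = (np.diag(K00).mean() - K00.mean()) + (np.diag(K11).mean() - K11.mean())
    return between / (within + 1e-12)

def select_gamma(X, y, grid=np.logspace(-3, 3, 13)):
    # Pick the Gaussian-kernel width that maximizes the discriminant criterion.
    scores = [fisher_criterion(rbf_gram(X, g), y) for g in grid]
    return grid[int(np.argmax(scores))]
```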