2007 IEEE 11th International Conference on Computer Vision
DOI: 10.1109/iccv.2007.4408855
Spectral Regression for Efficient Regularized Subspace Learning

Abstract: Subspace learning based face recognition methods have attracted considerable interest in recent years, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projection (LPP), Neighborhood Preserving Embedding (NPE) and Marginal Fisher Analysis (MFA). However, a disadvantage of all these approaches is that their computations involve eigendecomposition of dense matrices, which is expensive in both time and memory. In this paper, we propose a novel dimensionality red…

Cited by 327 publications (252 citation statements)
References 17 publications
“…Hence, the Spectral Regression framework reformulates the subspace learning problem as a two-step approach, namely graph embedding of the input data and regression for learning the parameters of the projection functions [16]. Following this formulation, only a small set of regularized least-squares problems has to be solved, which runs in linear complexity.…”
Section: Spectral Regression (SR) (mentioning)
confidence: 99%
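To make the two-step idea concrete, below is a minimal sketch (not the authors' implementation) of spectral regression using NumPy, SciPy and scikit-learn: a k-NN affinity graph supplies the spectral embedding responses, and a ridge-regularized least-squares solve recovers the linear projection. The function name `spectral_regression`, the k-NN affinity choice, and the direct linear solve are assumptions made here for clarity; the paper's linear-complexity claim relies on solving the regularized least-squares problems efficiently, e.g. with iterative solvers.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

def spectral_regression(X, n_components=2, n_neighbors=5, alpha=0.1):
    """Two-step subspace learning sketch: (1) graph embedding, (2) ridge regression."""
    # Step 1: graph embedding -- build a k-NN affinity graph and take the
    # smoothest eigenvectors of its normalized Laplacian as embedding responses.
    W = kneighbors_graph(X, n_neighbors, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T)                       # symmetrize the affinity graph
    L = laplacian(W, normed=True).toarray()
    _, eigvecs = eigh(L)
    Y = eigvecs[:, 1:n_components + 1]        # skip the trivial constant eigenvector

    # Step 2: regression -- for each response column, solve a regularized
    # least-squares problem for a linear projection vector a such that X a ~= y.
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    A = np.linalg.solve(XtX, X.T @ Y)         # projection matrix, one column per dim
    return A                                   # project new data with X_new @ A
```

In this sketch the expensive dense eigendecomposition is confined to the small graph Laplacian, while the projection itself is obtained by ordinary ridge regression, which is the point of the two-step reformulation.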
“…A problem with artificial neural networks is that their structure is very difficult to interpret: too many factors may lead to over-fitting, and the best network structure can only be determined experimentally [6]. Document [9] introduces a multimodal face and fingerprint recognition system with automatic detection and multi-level score fusion, in which the individual matcher scores are fused to improve the face recognition system [16,17].…”
Section: Biological Feature Recognition System (mentioning)
confidence: 99%
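As a rough illustration of score-level fusion in a multimodal biometric system, the following is a generic weighted-sum sketch, not the specific multi-level fusion scheme of document [9]; the helper name `fuse_scores`, the min-max normalization, and the weights are assumptions.

```python
import numpy as np

def fuse_scores(face_scores, finger_scores, w_face=0.5):
    """Weighted-sum score-level fusion of two biometric matchers (illustrative only)."""
    def min_max(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)  # rescale to [0, 1]
    # Normalize each matcher's scores so they are comparable, then combine them.
    return w_face * min_max(face_scores) + (1.0 - w_face) * min_max(finger_scores)

# Example: fused scores above a threshold are accepted as genuine matches.
fused = fuse_scores([0.62, 0.30, 0.81], [41.0, 12.5, 55.0])
accepted = fused > 0.5
```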
“…Apart from the input nodes, each node is a processing unit with a non-linear activation function. The MLP is trained with the back-propagation supervised learning technique, and the expressive power of the feed-forward MLP has also been verified [9]. Regarding arbitrary function approximation, a three-layer network is able to learn any function to any desired precision.…”
Section: Introduction (mentioning)
confidence: 99%
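For context, a three-layer feed-forward MLP (input, hidden, output) trained by back-propagation can be sketched with scikit-learn as below; the hidden-layer size, activation, and toy dataset are illustrative assumptions, not taken from the cited work.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy data standing in for feature vectors; sizes here are made up.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# One hidden layer gives a "three-layer" network; weights are fitted by
# back-propagating the loss gradient ('adam', an SGD variant, drives the updates).
mlp = MLPClassifier(hidden_layer_sizes=(32,), activation='relu',
                    solver='adam', max_iter=500, random_state=0)
mlp.fit(X, y)
print(mlp.score(X, y))   # training accuracy of the fitted network
```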
“…The input features were obtained by concatenation of the geometric and appearance features. Due to the excessive number of features, Spectral Regression (SR) (Cai et al. 2007) was applied to select the most relevant features for the intensity estimation of each AU. The intensity classification was performed using AU-specific SVMs.…”
Section: Intensity Estimation of Facial Expressions (mentioning)
confidence: 99%
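A rough sketch of that reduce-then-classify pipeline is given below, assuming the `spectral_regression` helper sketched earlier on this page is in scope and using made-up feature dimensions and AU labels; it illustrates SR used as a dimensionality-reduction step followed by one SVM per action unit, not the cited system itself.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical inputs: geometric and appearance features for N frames, plus
# per-frame intensity labels (0-5) for a couple of action units (AUs).
N = 200
geometric = np.random.rand(N, 40)
appearance = np.random.rand(N, 300)
X = np.hstack([geometric, appearance])          # concatenated feature vector
au_labels = {'AU12': np.random.randint(0, 6, N),
             'AU25': np.random.randint(0, 6, N)}

# Reduce the concatenated features with the spectral_regression sketch above,
# used here as a generic dimensionality-reduction step per the excerpt.
A = spectral_regression(X, n_components=10)
Z = X @ A

# AU-specific SVMs: one multi-class SVC per action unit on the reduced features.
au_models = {au: SVC(kernel='rbf').fit(Z, labels) for au, labels in au_labels.items()}
predicted_intensity = {au: model.predict(Z) for au, model in au_models.items()}
```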