2014
DOI: 10.1109/tsp.2014.2338077

Optimal Algorithms for $L_1$-subspace Signal Processing

Abstract: We describe ways to define and calculate $L_1$-norm signal subspaces that are less sensitive to outlying data than $L_2$-calculated subspaces. We start with the computation of the $L_1$ maximum-projection principal component of a data matrix containing $N$ signal samples of dimension $D$. We show that while the general problem is formally NP-hard in asymptotically large $N$, $D$, the case of engineering interest of fixed dimension $D$ and asymptotically large sample size $N$ is not. In particular, for the case where the sample size is less than t…
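The exact computation the abstract refers to admits a compact brute-force sketch. Assuming the paper's reduction of the problem (maximizing $\|X^T w\|_1$ over unit-norm $w$ is equivalent to maximizing $\|X b\|_2$ over antipodal sign vectors $b \in \{\pm 1\}^N$, with optimal component $w = X b_{\mathrm{opt}} / \|X b_{\mathrm{opt}}\|_2$), the following minimal illustration applies; the function name is ours, and the $2^N$ cost limits it to small sample sizes.

```python
import numpy as np
from itertools import product

def l1_pc_exhaustive(X):
    """Exact L1 principal component of a D x N data matrix X by
    exhaustive search over the 2^N antipodal sign vectors b, using
    max_{||w||=1} ||X^T w||_1 = max_{b in {+-1}^N} ||X b||_2."""
    best_val, best_b = -np.inf, None
    for bits in product((-1.0, 1.0), repeat=X.shape[1]):
        b = np.asarray(bits)
        val = np.linalg.norm(X @ b)        # ||X b||_2 for this sign pattern
        if val > best_val:
            best_val, best_b = val, b
    w = X @ best_b
    return w / np.linalg.norm(w)           # unit-norm L1 principal component
```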

Cited by 142 publications (7 citation statements) | References 38 publications
“…However, because the L2-norm enlarges the influence of outliers, the traditional functional principal component analysis method is sensitive to outliers. For multivariate data, on the other hand, research on principal component analysis methods [30][31][32][33][34][35][36][37] has shown that L1-norm principal component analysis is more robust than its L2-norm counterpart. In [30], Kwak (2008) proposed an L1-PCA optimization model based on L1-norm maximization for multivariate data, i.e., $W_{L1} = \operatorname{argmax}$…”
Section: Introduction (mentioning)
confidence: 99%
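The L1-norm-maximization model that this statement attributes to Kwak (2008) is usually solved with a simple fixed-point iteration. The sketch below is a minimal reading of that PCA-L1 procedure; the function name, initialization, and stopping rule are illustrative, and the zero-sign handling simplifies Kwak's perturbation step.

```python
import numpy as np

def l1_pca_kwak(X, n_iter=100, seed=0):
    """Fixed-point iteration for the L1-norm-maximization principal
    component ([30] in the quote above). X is D x N, columns = samples.
    Each step can only increase ||X^T w||_1, so the iteration converges
    to a local maximizer over unit-norm w."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X.T @ w)               # sign of each sample's projection
        s[s == 0] = 1.0                    # simplified zero-projection handling
        w_new = X @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):          # fixed point reached
            break
        w = w_new
    return w
```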
“…where $d = \operatorname{rank}(X)$; this solution suffers little performance degradation and is close to L2-PCA, but at the cost of being less robust than the method in [30]. The work in [32] offered an algorithm for exact calculation…”
Section: Introduction (mentioning)
confidence: 99%
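To make the robustness comparison in this statement concrete, here is a hypothetical toy experiment: six clean samples on the x-axis plus one gross outlier. The data, names, and printed values are illustrative only; they are not taken from [30] or [32].

```python
import numpy as np
from itertools import product

# D = 2, N = 7: six clean samples on the x-axis, one outlier at (0, 5).
X = np.array([[2.0, 1.0, -1.0, -2.0, 1.5, -1.5, 0.0],
              [0.0, 0.0,  0.0,  0.0, 0.0,  0.0, 5.0]])

# L2 PC: dominant left singular vector. The outlier alone carries the
# largest variance (25 vs. 14.5), so this is [0, 1] up to sign.
w_l2 = np.linalg.svd(X)[0][:, 0]

# Exact L1 PC via the 2^N sign-vector search: w = X b* / ||X b*||_2.
b_best = max((np.asarray(b) for b in product((-1.0, 1.0), repeat=X.shape[1])),
             key=lambda b: np.linalg.norm(X @ b))
w_l1 = X @ b_best / np.linalg.norm(X @ b_best)

print("L2 PC:", np.round(w_l2, 3))  # ~[0, 1]: locked onto the outlier
print("L1 PC:", np.round(w_l1, 3))  # ~[0.87, 0.49]: closer to the x-axis
```

The L2 component is captured entirely by the single outlier, while the exact L1 component stays much closer to the direction of the clean data, which is the robustness gap the statement describes.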
“…In recent years, low-rank matrix recovery [24][25][26][27][28][29][30][31][32][33][34][35][36] has attracted much attention in the areas of face recognition, image recovery, de-noising, and data reconstruction. The main purpose of low-rank matrix reconstruction is to recover a low-rank matrix from a data set corrupted by large but sparse errors.…”
Section: Introduction (mentioning)
confidence: 99%
“…In different applications, this is also called sparse and low-rank matrix decomposition, robust principal component analysis (RPCA), and rank-sparsity incoherence. Traditional low-rank matrix approximation methods consider only the low-rank approximation of a single matrix, such as RPCA [24], principal component analysis based on the $l_1$-norm (PCAL1) [25,26], and the recently developed low-rank matrix theory [27,28]. These methods can be considered generalized cases of the sparse representation (SR) method [29,30].…”
Section: Introduction (mentioning)
confidence: 99%
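As a concrete anchor for the RPCA decomposition these statements describe, the sketch below implements a simplified inexact augmented Lagrangian scheme for principal component pursuit ($\min \|L\|_* + \lambda \|S\|_1$ subject to $L + S = M$). The parameter defaults and stopping rule follow common conventions and should be treated as assumptions, not the exact algorithms of [24-28].

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(M, lam=None, mu=None, n_iter=300, tol=1e-7):
    """Decompose M into low-rank L plus sparse S via a simplified inexact
    augmented Lagrangian iteration for principal component pursuit.
    Defaults (lam = 1/sqrt(max(m, n)), mu from ||M||_1) are assumptions."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)                        # low-rank update
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)   # sparse update
        R = M - L - S                                            # residual
        Y += mu * R                                              # dual ascent
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```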
“…First, principal component analysis can be applied to high-dimensional vectors to perform dimension reduction for machine learning and data analysis applications [8], [20]. This includes pattern recognition applications [15] of subspace mapping [1] and subspace learning [14]. For example, it can be applied to the optimal design of an agile supply chain network under uncertainty [12] and to reducing the number of dimensions of a logistic regression model with continuous covariates while avoiding multicollinearity [4].…”
mentioning
confidence: 99%
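For the dimension-reduction use of PCA described in this last statement, here is a minimal numpy sketch; the names and interface are illustrative, not the cited works' code.

```python
import numpy as np

def pca_reduce(X, k):
    """Project N x D samples onto the top-k principal directions."""
    Xc = X - X.mean(axis=0)                  # center the data
    # Rows of Vt are the principal axes of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # N x k reduced representation

# Example: reduce 100 samples in R^10 down to 2 dimensions.
Z = pca_reduce(np.random.default_rng(0).standard_normal((100, 10)), 2)
```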