Proceedings of the 2018 VII International Conference on Network, Communication and Computing
DOI: 10.1145/3301326.3301385
Robust Regression for Image Alignment via Subspace Recovery Techniques

Cited by 8 publications (15 citation statements)
References 12 publications
“…It can also tackle the sparse errors in data points which are highly correlated across all data points in the images. In [8,38], joint dictionary learning methods are used, but the issue of affine transformation is not considered, while [45,46] used an image transformation without considering the L2,1 and weighted nuclear norms. Moreover, the L2,1 regularizer is regarded as the rotational invariant of the L1 norm and handles the collinearity between features, which is preferred for its robustness to outliers [47,48].…”
Section: Problem Formulation
Confidence: 99%
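The L2,1 norm referred to in this statement is the sum of the Euclidean norms of the rows (or, in some papers, the columns) of the error matrix; penalizing it zeroes out whole rows at once, which is why it behaves like a rotationally invariant L1 norm and resists sample-specific outliers. A minimal NumPy sketch of the two conventions (illustrative only, not the cited papers' code):

import numpy as np

def l21_norm(E, axis=1):
    # Sum of the Euclidean norms of the rows (axis=1) or columns (axis=0).
    # Unlike the entry-wise L1 norm, this drives entire rows/columns of E
    # toward zero, modelling sample-specific outliers rather than pixel noise.
    return np.linalg.norm(E, axis=axis).sum()

E = np.random.randn(6, 4)
print(l21_norm(E, axis=1))   # row-wise L2,1 norm
print(np.abs(E).sum())       # entry-wise L1 norm, for comparison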
“…Though several RPCA algorithms exist to deal with the potential impact of outliers and heavy sparse noises, effective and efficient algorithms still need to be developed. To mitigate this issue, the authors of [12,13] developed robust algorithms which can handle grossly corrupted data well. However, in very high-dimensional cases such as image feature extraction, recovery, and alignment, these methods lack good performance and low computational complexity.…”
Section: Introduction
Confidence: 99%
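For context, the RPCA formulation these statements build on is commonly written as principal component pursuit (a standard form, not the specific variant of any one cited paper):

    \min_{L,S} \; \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad D = L + S,

where \|L\|_* is the nuclear norm of the low-rank part, \|S\|_1 the entry-wise L1 norm of the sparse errors, and \lambda = 1/\sqrt{\max(m,n)} is the usual default weight for an m×n observation matrix D.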
“…To tackle the dilemma of overestimated ranks, the authors of [19,20] suggested RPCA algorithms which consider the decomposition of the original images into two broad components. Likassa et al. [12,13] considered a novel algorithm to tackle the misalignment dilemma, which is designed to find the low-rank component from the illuminated data. Oh et al. [21] presented a rank minimization algorithm which simultaneously aligns low dynamic range input images and detects outliers.…”
Confidence: 99%
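The alternating schemes referenced here typically reduce the nuclear-norm subproblem to singular value thresholding. A minimal NumPy sketch of that proximal step (illustrative, not the cited implementations):

import numpy as np

def svt(X, tau):
    # Proximal operator of tau * nuclear norm:
    # soft-threshold the singular values of X by tau.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Inside an ADMM-style RPCA loop, the low-rank update would look like
# L = svt(D - S + Y / mu, 1.0 / mu), with S and Y the current sparse part
# and multiplier estimates (these variable names are assumptions).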
“…To overcome this drawback, a myriad of robust algorithms have been proposed to deal with outliers and heavy sparse noises in high-dimensional images. Likassa et al. [11-13] considered new robust algorithms via affine transformations and L2,1 norms for image recovery and alignment, which boosted the performance of the algorithms. Moreover, [14-17] proposed efficient extensions of RPCA using affine transformations.…”
Section: Introduction
Confidence: 99%
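A generic way to write the affine-transformation extensions mentioned above (notation assumed here, not copied from the cited papers) is to compose each image with its own warp before the decomposition:

    \min_{L,S,\tau} \; \|L\|_* + \lambda \|S\|_{2,1} \quad \text{s.t.} \quad D \circ \tau = L + S,

where D \circ \tau stacks the images warped by per-image affine parameters \tau; the constraint is typically linearized around the current \tau using image Jacobians, so that L, S, and the increments \Delta\tau can be updated alternately.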
“…The recent papers also addressed the issue of sparsity for image representation and decomposition [40,41]. Likassa and Fang [12] proposed a low-rank sparse subspace representation for robust regression (LRS-RR) approach, which finds the clean low-rank part by low-rank subspace recovery combined with regression to deal with errors or outliers lying in the corrupted disjoint subspaces. The main challenge in image recovery and head pose estimation is to tackle the potential impact of outliers and heavy sparse noises.…”
Section: Introduction
Confidence: 99%
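The low-rank subspace recovery referred to in this statement is commonly posed as low-rank representation, where the data are reconstructed from their own subspaces; a generic form (not necessarily the exact LRS-RR objective) is

    \min_{Z,E} \; \|Z\|_* + \lambda \|E\|_{2,1} \quad \text{s.t.} \quad X = XZ + E,

with X the stacked vectorized images, Z the low-rank coefficient matrix capturing the union-of-subspaces structure, and E the column-sparse errors or outliers; a regression term on the clean part XZ can then be attached for tasks such as head pose estimation.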