2014
DOI: 10.1002/mrm.25240
Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components

Abstract: Purpose To apply the low-rank plus sparse (L+S) matrix decomposition model to reconstruct undersampled dynamic MRI as a superposition of background and dynamic components in various problems of clinical interest. Theory and Methods The L+S model is natural to represent dynamic MRI data. Incoherence between k−t space (acquisition) and the singular vectors of L and the sparse domain of S is required to reconstruct undersampled data. Incoherence between L and S is required for robust separation of background an…

Cited by 580 publications (679 citation statements)
References 36 publications
“…According to the theory of RPCA, Otazo et al [13] used the nuclear norm of L and the ℓ1 norm of S instead of the rank of L and the ℓ0 norm of S in (1), respectively, and proposed a convex optimization problem (the L+S model):…”
Section: Related Work
confidence: 99%
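The convex L+S objective quoted above can be sketched with alternating proximal steps: singular value thresholding for the nuclear-norm term and soft thresholding for the ℓ1 term. This is an illustrative, real-valued sketch with hypothetical thresholds `lam_L`, `lam_S` and a fully sampled matrix `M`; the paper's actual algorithm additionally enforces k–t data consistency on complex, undersampled data.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Elementwise soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def l_plus_s(M, lam_L, lam_S, n_iter=100):
    """Split M into a low-rank background L and a sparse dynamic part S
    by alternating the two proximal operators (no undersampling here)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, lam_L)  # low-rank update on the residual M - S
        S = soft(M - L, lam_S)  # sparse update on the residual M - L
    return L, S
```

Because the last step soft-thresholds `M - L`, the final residual `M - L - S` is bounded elementwise by `lam_S`; the choice of the two thresholds governs how the content splits between the two components.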
“…In [12], 3D cardiac MRI was reconstructed by combining L+S decomposition with prior knowledge. In [13], a new L+S model was proposed, which decomposed a series of dynamic MR images with temporal and spatial correlation into a low-rank matrix and a sparse matrix. As a result, the dynamic MRI could be separated into background and foreground components to aid diagnosis.…”
Section: Introduction
confidence: 99%
“…The most direct approach to SVT is to apply a full SVD via svd and then soft-threshold the singular values. In practice this approach is used in many matrix learning problems, according to the distributed code, e.g., Kalofolias, Bresson, Bronstein, and Vandergheynst (2014); Chi et al (2013); Parikh and Boyd (2013); Yang, Wang, Zhang, and Zhao (2013); Zhou, Liu, Wan, and Yu (2014); Zhou and Li (2014); Zhang et al (2017); Otazo, Candès, and Sodickson (2015); Goldstein, Studer, and Baraniuk (2015), to name a few. However, the built-in function svd computes a full SVD of a dense matrix, and is therefore time-consuming and computationally expensive for large-scale problems.…”
Section: Introduction
confidence: 99%
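The direct recipe this excerpt describes — a full SVD followed by soft thresholding — takes only a few lines, and a top-k partial SVD is a common remedy for the cost the authors note. This is an illustrative sketch (function names are ours), not code from any of the cited packages; it assumes at most `k` singular values exceed the threshold.

```python
import numpy as np
from scipy.sparse.linalg import svds

def svt_full(X, tau):
    """Direct SVT: full SVD of a dense matrix, then soft-threshold the
    singular values. Expensive for large matrices."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def svt_topk(X, tau, k):
    """Cheaper SVT when at most k singular values exceed tau:
    compute only the top-k triplets with an iterative solver."""
    U, s, Vt = svds(X, k=k)  # k largest singular triplets only
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

When every singular value below the top k falls under `tau`, the two functions return the same matrix, but `svt_topk` avoids the full factorization.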
“…Low-rank matrices and rank-one matrices are attractive because they need less storage space than general measurement matrices [17][18][19][20]. Indeed, if the measurement matrix is sparse, it takes less storage space and incurs less computational cost.…”
Section: Introduction
confidence: 99%
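As a concrete instance of the storage argument above: a rank-one measurement matrix a bᵀ over m × n images needs only m + n stored numbers, and the measurement ⟨a bᵀ, X⟩ can be evaluated as aᵀ X b without ever forming the m × n matrix. A minimal sketch (names are ours, for illustration):

```python
import numpy as np

def rank_one_measurement(a, b, X):
    """<a b^T, X> = a^T X b: apply a rank-one measurement while storing
    only the m + n entries of a and b, never the m*n matrix a b^T."""
    return a @ X @ b

rng = np.random.default_rng(2)
m, n = 64, 48
a, b = rng.normal(size=m), rng.normal(size=n)
X = rng.normal(size=(m, n))

# Same scalar as forming the dense measurement matrix explicitly:
dense_value = np.sum(np.outer(a, b) * X)
```

Here the factored form stores m + n = 112 numbers versus m · n = 3072 for the explicit matrix, illustrating the storage saving the excerpt refers to.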